Building an ML model is easier than ever, but it is still a challenge to get that model in front of the people you built it for. Baseten is a platform that helps you quickly generate a full-stack application powered by your model. You can easily create a web interface and APIs powered by the model you created, or by a pre-trained model from their library. In this episode Tuhin Srivastava, co-founder of Baseten, explains how the platform empowers data scientists and ML engineers to get their work into production without having to negotiate for help from their application development colleagues.
Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building Natural Language Processing (NLP) models to programmatically inspect, fix, and track their data across the ML workflow (pre-training, post-training, and post-production) – no more Excel sheets or ad-hoc Python scripts. Get meaningful gains in your model performance fast, dramatically reduce data labeling and procurement costs, while seeing 10x faster ML iterations. Galileo is offering listeners a free 30 day trial and a 30% discount on the product thereafter. This offer is available until Aug 31, so go to themachinelearningpodcast.com/galileo and request a demo today!
- Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
- Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
- Your host is Tobias Macey and today I’m interviewing Tuhin Srivastava about Baseten, an ML Application Builder for data science and machine learning teams
- Introduction
- How did you get involved in machine learning?
- Can you describe what Baseten is and the story behind it?
- Who are the target users for Baseten and what problems are you solving for them?
- What are some of the typical technical requirements for an application that is powered by a machine learning model?
- In the absence of Baseten, what are some of the common utilities/patterns that teams might rely on?
- What kinds of challenges do teams run into when serving a model in the context of an application?
- There are a number of projects that aim to reduce the overhead of turning a model into a usable product (e.g. Streamlit, Hex, etc.). What is your assessment of the current ecosystem for lowering the barrier to product development for ML and data science teams?
- Can you describe how the Baseten platform is designed?
- How have the design and goals of the project changed or evolved since you started working on it?
- How do you handle sandboxing of arbitrary user-managed code to ensure security and stability of the platform?
- How did you approach the system design to allow for mapping application development paradigms into a structure that was accessible to ML professionals?
- Can you describe the workflow for building an ML powered application?
- What types of models do you support? (e.g. NLP, computer vision, timeseries, deep neural nets vs. linear regression, etc.)
- How do the monitoring requirements shift for these different model types?
- What other challenges are presented by these different model types?
- What are the limitations in size/complexity/operational requirements that you have to impose to ensure a stable platform?
- What is the process for deploying model updates?
- For organizations that are relying on Baseten as a prototyping platform, what are the options for taking a successful application and handing it off to a product team for further customization?
- What are the most interesting, innovative, or unexpected ways that you have seen Baseten used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Baseten?
- When is Baseten the wrong choice?
- What do you have planned for the future of Baseten?
Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?
- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
- Baseten
- Gumroad
- scikit-learn
- TensorFlow
- Keras
- Streamlit
- Retool
- Hex
- Kubernetes
- React Monaco
- Hugging Face
- Airtable
- Dall-E 2
- GPT-3
- Weights and Biases
Hello, and welcome to The Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building natural language processing models to programmatically inspect, fix, and track their data across the ML workflow, from pretraining to posttraining and postproduction. No more Excel sheets or ad hoc Python scripts.
Get meaningful gains in your model performance fast, dramatically reduce data labeling and procurement costs while seeing 10x faster ML iterations. Galileo is offering listeners of the Machine Learning Podcast a free 30 day trial and a 30% discount on the product thereafter. This offer is available until August 31st, so go to themachinelearningpodcast.com/galileo and request a demo today. Do you wish you could use artificial intelligence to drive your business the way big tech does, but don't have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%.
Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, go to themachinelearningpodcast.com/graft. That's graft,
[00:01:37] Unknown:
and tell them Tobias sent you. Your host is Tobias Macey, and today I'm interviewing Tuhin Srivastava about Baseten, an ML application builder for data science and machine learning teams. So, Tuhin, can you start by introducing yourself?
[00:01:50] Unknown:
Yeah. Hey. I'm Tuhin. I'm the CEO and one of the cofounders of Baseten. I have a background in machine learning, have founded a couple of companies, and have been working on Baseten since 2019.
[00:02:01] Unknown:
And do you remember how you first got started working in the space of machine learning?
[00:02:05] Unknown:
In 2012? No, 2011, actually. I was working in finance, pretty unhappy in my job. I'd studied electrical engineering in college and done a bunch of information theory, and I was looking to figure out how to get out of finance. I came across this opportunity to go do research with a neurologist in Boston who was trying to figure out how to use noninvasive biomarkers to track the progression of neuromuscular disease. So I went over there, joined the lab, and, you know, we ended up doing a bunch of really cool stuff around predicting the onset of disease and how that disease progressed. We published a bunch of papers, worked on a small company that came out of it, and, you know, I was off to the races.
[00:02:45] Unknown:
And so in terms of what you're building at Baseten, can you give a bit of an overview about what you're focused on and some of the story behind how it came to be, and why you decided that that was a problem that you wanted to spend your time and energy on? Baseten is a machine learning application builder for data science and machine learning teams. I think the best way for users to think about
[00:03:04] Unknown:
Baseten is kinda like a toolkit to get your models out of notebooks and into value. We started Baseten really out of, you know, a pain point that we saw for ourselves, which is that we were working on machine learning teams at very, very small companies where we didn't necessarily have all the resources we needed to bring those models to market. To make it a bit more concrete: we were working at a startup called Gumroad on fraud detection and content moderation. As a machine learning engineer myself, it took me about 3 or 4 weeks to get this great dataset together and come up with my first model. What I realized really, really quickly after that was that I'd have to become a full-stack engineer to be able to actually get this model to value for the business, because we just didn't have the resources to figure out how to deploy that model, how to build the back end around that model, how to build the internal tools that operate off the output of the model.
And so, you know, back in 2012 to 2015, I and our team went and basically learned that skill set, in the absence of those resources. That was great for me personally, frankly, but a pretty horrible investment for the company. Fast forward to 2018 and 2019, and we were talking to a bunch of our friends who run machine learning teams, or do machine learning, at larger or midsized companies. What we realized was that the results from machine learning efforts and initiatives hadn't really lived up to all the hype. We went pretty deep on it, and what we found, at least, was that there are all these great tools in the machine learning operations space, the MLOps space, that make it really easy to, one, train your model, experiment with your models, and figure out how to get those models behind some sort of API. What we found was missing was that once that model is behind some API, you still need a whole full-stack team, which is exactly the problem we ran into. And so we kind of conditioned on that set of use cases and tried to see if we could abstract out that toolkit, productize that skill set that I went and learned, or our team went and learned, back in the day. And that's really what Baseten is today.
[00:05:20] Unknown:
In terms of the problems that you're solving, you made it pretty clear that there's the initial step of: I've got a model and I can put an API in front of it, but then how do you actually build the whole application around it? And so in terms of the solution for that problem, I'm wondering if you can speak to the type of user that you're focused on addressing, some of the types of organizations that might rely on that capability rather than having their own in-house team to productionize the applications around these models, and what the typical flow is from "I've got a model and I've got a prototype built with this full-stack app builder in the form of Baseten" to "now I actually want to bring the feature set even further."
[00:06:05] Unknown:
You know, the target users for Baseten are data scientists and machine learning engineers. We've done a lot of work on figuring out where these users and teams sit, and it's usually that they work at companies or on teams where they don't have the support of a machine learning platform team. For us, that makes it super easy, because there's only, you know, like 10 or 15 companies in the world with proper machine learning platform teams. The way I would kinda characterize it is: you're part of a scrappy team, you have a machine-learnable problem, you know what that first version of your model is gonna look like. Maybe you have somewhere between 1 and 10 data scientists on your team, but you don't necessarily have a complete platform team built out to support that data science function.
In terms of where they are from a sophistication perspective, I think that's a good question. For us, it's kinda what I said: they need to have a machine learning problem. A really bad fit for us is a data engineer picking up Baseten, because what we find is that you're still focused on getting that data in the right form, which is a big problem in itself. You need to be past that stage and more in the "I have a model" stage. In terms of what we assume of those users, we make one assumption of our user, which is Python, and we lean very, very heavily into that assumption. We find Python to be very, very powerful, but what we try to abstract away is all the infrastructure required to work with Python, and to give you a set of other tools that allow you to leverage Python to solve real business problems. So maybe this is not the right time for it, but I can just jump into how we do that, if that's helpful. We have kind of, like, 3 pillars to Baseten. The first one is around model deployment. You have a model sitting in your notebook somewhere. Baseten has a Python SDK that you can call from your notebook or your Python shell that wraps most types of models, so scikit-learn, TensorFlow, Keras, and so on and so forth. And within really, like, a few minutes, you can have a model deployed behind an API and ready to go. But that's really just where the value of Baseten starts. We think deployment's actually a commodity; we're not super interested in being a machine learning deployment company. That's an integration cost for us; we just happen to have an added benefit there. The next step, though, is that you need to build some sort of API that sits around that model, because the inputs and outputs of the model don't really map to the business requirements.
You need to do some preprocessing or postprocessing. You need to write that data somewhere. You need to combine it with some other data. And so we kinda give you this thing that looks like AWS Lambda or GCP Cloud Functions, where you can write Python code that interacts with that model in Baseten. And all of a sudden, you've gone from a model in your notebook, to a model behind an API, to a model with some pieces of logic around it behind an API. And then the third pillar is that more front-end, application-building part of it. Let's go back to that fraud example: your model says, hey, I am 80, 90% sure that this is fraud. Great. But when your model says it's 40% sure, 50% sure it's fraud, you want a human to be able to review that. And so Baseten kind of gives you the low-code UI builder to be able to interact with that business logic that I described earlier, and also put together these UIs quite quickly.
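That pre- and postprocessing pillar can be sketched as a Lambda-style Python handler wrapped around a model. Everything below is illustrative, not Baseten's actual API: the toy `FraudModel`, the field names, and the score thresholds are all made up to mirror the fraud-review example from the conversation.

```python
# Hypothetical sketch: a Lambda-style function that preprocesses
# business-level input, calls a model, and postprocesses the raw score
# into a decision (including the "route to a human" middle band).
from dataclasses import dataclass


@dataclass
class FraudModel:
    """Stand-in for a deployed model that returns a fraud probability."""

    def predict_proba(self, amount: float, country_risk: float) -> float:
        # Toy scoring rule, not a real model.
        return min(1.0, 0.0005 * amount + 0.5 * country_risk)


def handler(payload: dict, model: FraudModel) -> dict:
    """Entry point: preprocess, predict, postprocess."""
    # Preprocessing: map business-level fields onto model inputs.
    amount = float(payload["amount_usd"])
    country_risk = {"low": 0.1, "medium": 0.4, "high": 0.8}[payload["country_risk"]]

    score = model.predict_proba(amount, country_risk)

    # Postprocessing: map the raw score onto a business decision.
    # Mid-range scores get routed to human review, as described above.
    if score >= 0.8:
        decision = "block"
    elif score >= 0.4:
        decision = "review"
    else:
        decision = "allow"
    return {"fraud_score": round(score, 3), "decision": decision}
```

The point of the sketch is the shape, not the logic: the model's inputs and outputs live on one side, the business requirements on the other, and the handler is the glue code a platform can host for you.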
[00:09:27] Unknown:
As far as the overall process for being able to build these full-stack applications with Baseten, as you said, it gives you a lot of useful tools that are all tightly integrated to be able to manage that end-to-end flow of: I've got a model, now I need to build the app around it. In the absence of Baseten, what are some of the strategies, utilities, and development patterns that teams have built up to solve that problem, and some of the pain points that they're experiencing as far as trying to integrate all of those components on their own?
[00:10:00] Unknown:
What folks have come up with is kinda, like, 2 solutions. One is to hire consulting teams, especially when you go to larger companies. They just hire application-building teams to sit on top of these ML models. We think that's really bad. I understand the need for it, but the fact that these machine learning teams can be so far removed from the products that they're powering, I think, actually leads to worse outcomes overall. That's approach one. Approach two is just to stitch together a bunch of different solutions. So, like, what we've seen is you deploy your model with, say, SageMaker, or with something on GCP like Vertex AI. You then go and spin up a Flask application and run that in AWS, or you use AWS Lambda or GCP Cloud Functions to interact with that model. And then you pipe that to some database, and then you'll, again, either rely on a consulting team or a different product engineering team to build with that data, or you'll use something like Streamlit or Retool to sit on top of that data. Now, I think it's probably only the scrappiest, most productive teams that are able to do that. Most teams just get stuck. Data scientists don't really know infrastructure that well; they know infrastructure around models. They shouldn't need to know infrastructure. That's not what their core skill set is. And so as a result, what happens is that these models just end up running in local notebooks somewhere and don't really get beyond that. I think the other approach, which the most successful companies have taken, is to really start to work with a machine learning platform team. But as I said, that's really a champagne problem, where you're doing really well when you have the resources to be able to put together a machine learning platform team.
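The "stitch it together yourself" pattern — a hand-rolled HTTP endpoint sitting in front of a model — can be sketched with nothing but the Python standard library. A real team would use Flask or FastAPI and a real deployed model; `predict` here is a placeholder for the model call.

```python
# Minimal sketch of the DIY model-serving endpoint the interview
# describes. Stdlib only: http.server stands in for Flask/Lambda, and
# predict() stands in for a real model behind SageMaker or Vertex AI.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features: list) -> float:
    """Placeholder for a real model's predict call (here: the mean)."""
    return sum(features) / max(len(features), 1)


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON payload and run it through the "model".
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep request logging quiet for this sketch.
        pass


def serve(port: int = 0) -> HTTPServer:
    """Bind the server (port 0 = ephemeral); caller runs and shuts it down."""
    return HTTPServer(("127.0.0.1", port), PredictHandler)
```

Even this toy version shows why teams get stuck: everything past this point — TLS, auth, scaling, monitoring, model reloads — is DevOps work, not data science work.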
[00:11:40] Unknown:
In terms of the actual process of running a model in the context of an application, beyond just being able to wire together the UI or build the API, what are some of the core technical requirements needed to actually support that model as it's operating? How do you ensure that you're able to scale it as usage grows, that you're able to monitor it, and that you can manage the integration of the API endpoints into the actual model execution, along with any supporting storage or other technical components required just to be able to say: I have a model, I have an application, and everything is running and happy? I feel like you just outlined everything yourself there. But
[00:12:27] Unknown:
in terms of, like, without Baseten, the truth is you need, like, a DevOps engineer. Right? You need folks who know how to configure servers, how to manage storage. Oftentimes, models can be quite big, so even unpickling them can take up too much memory. And so I think without Baseten, you end up needing to do a lot of the same sort of DevOps engineering that's required with a traditional web app at scale or a traditional API at scale, but you have to do it a bit earlier, because you run into the memory requirements earlier than you traditionally would. I think what's interesting is that machine learning models also bring in, like, a second set of considerations around the performance of that model. The performance of that model is probabilistic and not deterministic, which is interesting, and that brings a whole new set of monitoring constraints around that model. It's like, okay, is this model running? Great. How fast is it running? Fantastic.
Now, the harder question to answer is: is this model doing the thing that we want it to do? And the answer to that is probabilistic, which brings up a whole new set of challenges. With Baseten, what we try to do is abstract all that stuff away. Our goal is to make the easy thing super easy, yet keep the hard thing possible. And so for us, what that means is that machine learning teams and data scientists can deploy a model with a few lines of code, and the default is sensible. You don't need to think about scaling stuff up, because we have an autoscaling strategy for your model. Our models don't go down, and when they do, we have code to take care of that for you, and we provide you monitoring dashboards and health dashboards to be able to look at that. That being said, as an engineer myself and someone who's struggled over the years with stuff like Heroku, it can get really frustrating when a lot of these platforms abstract so much away that you can't configure anything at all. So we actually provide the knobs necessary for you to change things underneath: if you need more compute, if you need a GPU behind it, Baseten gives you those knobs. But the default is sensible and good enough. We've thought about how to solve the autoscaling and load balancing, but you don't need to think about that as a data science or machine learning team. You mentioned a couple of other projects that are
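The "sensible defaults, but knobs when you need them" idea can be sketched as a tiny config-resolution helper. To be clear, every field name below (`cpu`, `use_gpu`, `min_replicas`, and so on) is hypothetical, invented for illustration; it is not Baseten's actual configuration schema.

```python
# Hypothetical sketch of default-plus-overrides resource config for a
# deployed model. Unknown knobs are rejected loudly rather than
# silently ignored, so a typo can't quietly fall back to defaults.
DEFAULTS = {
    "cpu": "1",          # vCPUs
    "memory": "2Gi",
    "use_gpu": False,
    "min_replicas": 1,   # autoscaling floor
    "max_replicas": 4,   # autoscaling ceiling
}


def resolve_config(overrides=None):
    """Merge user overrides onto sensible defaults, validating as we go."""
    config = dict(DEFAULTS)
    for key, value in (overrides or {}).items():
        if key not in DEFAULTS:
            raise KeyError(f"unknown resource knob: {key!r}")
        config[key] = value
    if config["min_replicas"] > config["max_replicas"]:
        raise ValueError("min_replicas cannot exceed max_replicas")
    return config
```

The design choice worth noting is that the zero-argument call is always valid: a data scientist who never touches the knobs still gets a working deployment, which is exactly the Heroku lesson described above.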
[00:14:46] Unknown:
in sort of a neighboring space of making it easier for a machine learning engineer or a data science team to go from: I have a hypothesis, I will build a model, and now I just want to wire this up to prototype something. So you mentioned Streamlit. I know that there's also the Hex platform. There are a few other projects and platforms in the ecosystem that reduce the barrier to entry for somebody who's very heavy in the statistical and machine learning capacity to say: I just want to put this in front of somebody to do something with it. And I'm wondering if you can give your characterization of the space, maybe some broad categorization of how these tools might be used and applied, and where Baseten fits in that overall spectrum.
[00:15:34] Unknown:
From my perspective, at least, the emergence of all these tools has been truly fantastic. It is amazing that we are creating tools that make this thing, which is actually very difficult, a lot easier. I'd say that I am a big fan of Streamlit and I'm a big fan of Hex. I feel like, just by design, they're more suited to, like, BI use cases as opposed to operational machine learning use cases. What I mean by that is that what we're going after is these real human-in-the-loop workflows, where we're trying to build the tooling that allows you to combine humans with machine learning models and make decisions.
That is Baseten's bread and butter: fraud detection, content moderation, where there's a model making a decision and you need a human to work with that model to kinda make a better decision. I'd say that even beyond Streamlit and Hex, I think products like Retool have just been fantastic, and more than anything have destigmatized this idea that we're at a point now with software engineering where we can abstract away a lot of these things, especially if you condition on, like, a certain set of use cases. I guess Retool's big realization was: hey, all internal tools actually have, like, 7 or 8 primitives.
In other words, they have a button, they have a form, they have maybe some BI to them. And so I'm really excited by all these UI builders that have come up, whether they come in the form of Streamlit, which is very much "write Python, get a visual app"; or Hex, which is kinda Streamlit plus the drag-and-drop ability to turn it into a narrative, and make that really easy; or Retool, which is like, hey, you can just build internal tools very quickly. I'd say Baseten definitely fits into one side of that, which is, you know, very much that we make it easy for you to build these user-facing applications.
But I think we're also focused on the back end, which I'd say some of these other projects or companies aren't focused on today. Maybe they will go there at some point, but, like, with Retool, you still need to set up your own back end, you still need to set up your own database that you can query against; it very much is the front end. With Baseten, we allow you to build APIs. I don't know if there are that many faster ways to execute a blob of Python code behind an API than Baseten: it's literally pasting it into something like an orchestration graph in a point-and-click manner, so no YAML required, and you have an API ready to go. But really, what that allows us to do, together with the integration on the model side, is be a lot more focused on workflows as opposed to single-use applications or single-use dashboards.
[00:18:12] Unknown:
Digging into the Baseten platform itself, can you talk through some of the implementation, architecture, and custom engineering that you've had to build to support this workflow of: take a model, build an application around it, and put it into production?
[00:18:27] Unknown:
I think a few things. I'll start with the model deployment side. The first thing we did was build this cool SDK in Python that you call from your Python client. With one line, you can deploy a model. What we're doing there is actually inspecting your environment, looking at what requirements you need, spinning up a Kubernetes cluster somewhere to run that code, and then giving you an API to access that Kubernetes cluster. Our goal here was to make the amount of custom code necessary for the user to build that as minimal as possible. The second part of Baseten is a bit more interesting, right, where it's very much like Replit, but we are running user-provided code in some isolated environment. And that has all sorts of security concerns, as you might imagine. But really, again, what we're doing here is that you provide us some blob of Python code, and we have some pretty intuitive ways to assemble that, and you give us a requirements.txt, which lists the requirements to run that code. And, again, we go and create a cluster somewhere with that isolated environment, run your code, and provide you the interface to do that. We had to go through a few rewrites of that, because the actual core problem of running that code was difficult from an infrastructure perspective.
But the harder problem was: what is the user experience for writing code on Baseten, and what does that look like? What we wanted to do was build a way that you could write code locally and push to Baseten, as opposed to having to write your code in Baseten. And I think while the infrastructure behind it is definitely very impressive, what's more impressive about what we've built is, like, the user interface to writing code, so that it feels like it fits into your developer workflow. Skipping over to the front-end side of things, what's been really interesting is: how do you build a drag-and-drop tool so that data scientists can build a UI without having to know JavaScript? And that's not purely a display UI by itself; it has event handlers.
So what are the abstractions required so you can bind a button to run some code without knowing any JavaScript? That was actually harder than I thought it'd be. I thought, this sounds hard in theory, but we can probably put something out there pretty quickly. Turned out it took us 2 or 3 different rewrites to get it right. But the solutions we've come up with have really ranged from deep infrastructure problems, like how do you run these models or these blobs of code somewhere and create the API in a performant way; to security, like how do you isolate users' code from one another so they can't access it, how do you protect users from users, and how do you protect Baseten from users? I think these are hard security problems that we've spent way too many hours scratching our heads against the wall over, but I think we've come up with good solutions. And then, on top of it all, what are the abstractions, what are the frameworks required, so a data scientist can build complex UI without knowing JavaScript? The range of problems there is actually quite broad, and tying them all together in a performant manner so it doesn't feel like a jumbled mess, you know, that's been even harder; that's the UI and the user flow, if you may.
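That environment-inspection step, pinning the packages a model depends on so a remote cluster can rebuild the environment, can be sketched with the standard library's `importlib.metadata`. This is an illustrative reduction, not Baseten's actual mechanism; a real system would also walk the model's import graph to discover which packages to pin.

```python
# Sketch of "inspect your environment": turn a list of package names
# into requirements.txt-style pins based on what is actually installed.
from importlib import metadata


def pin_requirements(packages):
    """Return 'name==version' lines for each installed package."""
    lines = []
    for name in packages:
        try:
            lines.append(f"{name}=={metadata.version(name)}")
        except metadata.PackageNotFoundError:
            # Skip anything not installed rather than guessing a version.
            continue
    return lines
```

Usage would look like `pin_requirements(["scikit-learn", "numpy"])`, producing exact pins so the serving environment matches the notebook the model was trained in.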
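The button-binding abstraction described above can be sketched as a small event registry: UI components fire named events, and the data scientist attaches plain Python functions to them, never touching JavaScript. The component names, event names, and decorator shape here are all invented for illustration.

```python
# Hypothetical sketch of binding UI events to Python callables. The
# front end would call dispatch() when the user clicks; the data
# scientist only ever writes the decorated Python functions.
class EventBindings:
    def __init__(self):
        self._handlers = {}

    def bind(self, component: str, event: str):
        """Decorator: register a Python function as an event handler."""
        def register(func):
            self._handlers[(component, event)] = func
            return func
        return register

    def dispatch(self, component: str, event: str, payload=None):
        """Called by the front end when the bound event fires."""
        handler = self._handlers.get((component, event))
        if handler is None:
            raise KeyError(f"no handler bound to {component}.{event}")
        return handler(payload)


ui = EventBindings()


@ui.bind("approve_button", "click")
def approve(payload):
    # Plain Python business logic: approve a flagged transaction.
    return {"status": "approved", "txn": payload["txn_id"]}
```

The design point is the indirection: the front end only knows `(component, event)` pairs, so the UI can be assembled by drag and drop while the behavior stays in Python.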
[00:21:38] Unknown:
And you mentioned that you went through a few different iterations of how to tackle this problem. I'm wondering if you can talk through some of the ways that the original design and goals of the project have changed and evolved since you first started working on it, to bring you to where you are today? I'd say, like, when we started the company at the end of 2019,
[00:21:57] Unknown:
we weren't really sure where we sat in terms of what we required of our users, and this actually put us in a really bad spot. It led us astray for a while, where, you know, for a long time we were like, oh, the user knows Python, but maybe they don't need to know Python, or they only need to know a little bit of Python. What we realized, maybe a year into the project, is that actually code's amazing, and we shouldn't try to hide code away. We are going for a very technical audience who is pretty proficient with Python, and we should lean really heavily into that. That was, like, a big, almost paradigm shift for us, when we stopped trying to build for the lowest common denominator, if you may, and instead said: our user knows Python really well. They can even think about infrastructure; maybe they don't know the language of infrastructure, but they can think about these things. We leaned completely into that. And that changed even just, like, how you wrote code, how you orchestrated running code, what the APIs look like.
Even for the UI builder: what did we require from the user to be able to bind data to a table? Is it okay if they have to write a snippet of code? Once we leaned into that, a lot of those ideas became a lot clearer. And, you know, I kinda leaned pretty heavily on one of our developers, Saran, who does a lot of open source work and is the primary maintainer of the React Monaco library, which we use very heavily. Talking to him, I remember asking him: how do you decide what should be part of the project and what shouldn't? And he talked about defining, really early, the philosophy and the user of your project, and being really, really crystal clear about what your philosophy, what your utility function, is. For us, that came a bit later than I would have hoped, but once we had that utility function in place, it became really, really easy to not only prioritize features but, like, design the software and the underlying abstractions.
[00:23:49] Unknown:
Because you allow users to submit arbitrary code to customize the behavior of their back end, that always brings in a lot of interesting challenges around security and performance: making sure that somebody doesn't bring your platform down, either deliberately or inadvertently because they accidentally wrote an infinite loop. I'm curious how you handle those additional complexities of appropriately sandboxing the user's code, and whether you provide limitations and constraints to make sure that you don't end up with these infinite loops or an accidental explosion of resource consumption?
[00:24:27] Unknown:
I can speak to some of that, although a lot of the day-to-day has been taken off my plate. I'd say it's managed in three different stages. One was from a security perspective: how do we protect users from other users, how do we protect users from Baseten, and how do we protect Baseten from users? We have a lot of customers who give us quite a lot of sensitive data, and it is very important for us to get that right; these are existential problems for Baseten if we get them wrong, frankly. I think the part that's a bit more interesting is the usability side of it: how do we protect users from doing things that will make them end up in bad states or infinite loops?
I think this all comes down to having sensible timeouts and knowing when to stop running code. Probably most importantly, it's making it really easy to do the right thing and hard to do the wrong thing. We spend a lot of time ensuring that if you do get into a bad state, it's likely because you kinda did it on purpose, not because you got there by accident. So, just to reiterate: from an infrastructure perspective, we spend a lot of time, because it's so important, protecting users and making sure our platform can scale; and from a usability perspective, we just make it really easy to do the right thing.
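The "sensible timeouts" idea can be sketched in a few lines. This is purely illustrative — the episode doesn't describe Baseten's actual sandboxing, and a real platform would rely on container-level isolation — but one common pattern is to run untrusted code in a child process and terminate it if it exceeds a time budget:

```python
import multiprocessing
import time

# Use fork so this works when run as a plain script on Unix;
# a real platform would isolate user code far more thoroughly.
_ctx = multiprocessing.get_context("fork")

def run_with_timeout(fn, timeout_s, *args):
    """Run fn in a child process and kill it if it overruns timeout_s.

    Returns True if fn finished in time, False if it was terminated.
    """
    proc = _ctx.Process(target=fn, args=args)
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():
        proc.terminate()  # hard stop for runaway user code
        proc.join()
        return False
    return True

def quick_task():
    time.sleep(0.1)  # well-behaved "user code"

def infinite_loop():
    while True:       # pathological "user code"
        time.sleep(0.1)
```

Here `run_with_timeout(quick_task, 5.0)` returns promptly, while `run_with_timeout(infinite_loop, 0.5)` is cut off after half a second, which is the "knowing when to stop running code" behavior described above.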
[00:25:49] Unknown:
Predibase is a low code ML platform without low code limits. Built on top of their open source foundations of Ludwig and Horovod, their platform allows you to train state of the art ML and deep learning models on your datasets at scale. The Predibase platform works on text, images, tabular, audio, and multimodal data using their novel compositional model architecture. They allow users to operationalize models on top of the modern data stack through REST and PQL, an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more. That's predibase.
[00:26:30] Unknown:
As far as the overall design and interaction patterns for people who are building an application using Baseten, I'm wondering what the discovery and design process looked like for figuring out, this is, maybe not necessarily the optimal workflow, but a workflow that is understandable and accessible for our target audience. And some of the initial false starts, we'll say, that you ran into as you were starting to say, okay, this is what we think is the right way to go about it, but now we're actually going to put this in front of some of our early design partners, get some feedback, and then figure out, oh, actually, that was not the right way to do it. What led you to the current approach of breaking it into the core concepts and base components that you ended up with now? I think
[00:27:20] Unknown:
we saw it from the user stories, and we thought about what are all the things the user needs to do. Again, it really helps being a prospective user of your own product, because we could think about it that way. Okay, we wanna deploy a model, we wanna build APIs, and we wanna build front ends. Then we took it one step deeper: okay, we wanna build APIs in Python, we don't wanna think about AWS, we don't wanna think about Docker. We really decided to start from that top level and kept breaking it down into lower and lower level stories until it mapped onto, okay, how would this work for a given product? So what we did for a long time was that we had these 2 or 3 different use cases, and basically every week or 2, we'd take our abstractions and say, how do we build this start to finish with Baseten? Honestly, we tested that against ourselves for a long time before it felt right. In terms of customers, we started working with customers probably 4 or 5 months into building the product. It should have been sooner, for sure; I think that's one of the things we'd do differently. But as soon as we started going to customers, it became really clear what worked and what wouldn't work. As an example, and I alluded to this earlier, one of the things we had early on was that you were just writing code directly into this orchestration graph, and then that code would be executed.
One piece of feedback we got was, hey, how does this fit into my developer workflow? How does it fit into my local workflow? How does this work with version control? How is this gonna scale? All these things ended up changing how we thought about things and really mapping it back to the existing development workflow. To reiterate, one of the things that helped us a lot, as developers and data scientists ourselves, is that it was really easy, at least early on, to find our North Star in terms of building things. Engineers are pretty harsh critics of software, as you know, and we would just ask ourselves, does this pass the bar as something we would choose? We kept iterating until we at least hit that, and then we started going to our design partners. And what we found is, having gone through those couple of higher orders of thinking ourselves, it just made those conversations a lot easier.
[00:29:37] Unknown:
In terms of the actual process of somebody discovering Baseten and saying, I wanna build an application, can you just talk through the overall workflow of taking the model, building the application, wiring the components together, and some of the decision points and design questions that they'll need to answer as they go through that process?
[00:29:58] Unknown:
So to get started, it's super easy. Right? You pip install Baseten, you configure your API key, and you deploy your model. And once your model is in Baseten — and you can start without a model; like I said, we also give you pre-trained models, and we have a lot of ready-made models on Baseten that you can start to play with — it's just a couple of clicks to start creating workflows that orchestrate that model. For your models, and for the API endpoints you're building around those models, we give you a way to test everything inline before you even have to call it with an API. We give you a way to look at logging inline, so that you can see exactly what's happening start to finish before you have to call it somewhere else. But really, the idea is that you can go from, I have a model, to, I have a deployed model, to, I have an API endpoint with some preprocessing and postprocessing code in Python that's ready to call.
And you're off to the races. Once you have that API endpoint configured, really it's just going and thinking about what application front end you need to support it. It's drag and drop: you can pick a table, you can pick a button, you can pick an image gallery or a text input. You can wire those up to your API endpoints, so you can take some input from the user, run it against the model, and show the output. But really, it's a horizontal product with the 3 different pillars, and it's worth saying that we've had users starting at all different spots. Some people start without a model. One of the customers we're really proud of is Patreon; they started with a data labeling app, so they started very much in the UI builder. But a lot of our customers don't even build a UI; they just deploy the model and start building the APIs around it. So it really is kind of pick your own adventure. We have a guided way,
which is how I think about it and how I use it: starting from your model, then building the workflow — sorry, the API endpoint — then building the front end applications that sit on top of those endpoints.
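The model → endpoint → front end flow described above can be illustrated with a tiny sketch. Everything here is hypothetical — the function names and the stand-in model are mine, not Baseten's API — it just shows the shape of an endpoint that wraps a model with Python pre- and post-processing:

```python
def preprocess(raw):
    # Normalize input before it reaches the model.
    return raw.strip().lower()

def fake_model(text):
    # Stand-in for a deployed model: a toy sentiment "score"
    # that rises with the number of exclamation marks.
    return min(1.0, 0.4 + 0.2 * text.count("!"))

def postprocess(score):
    # Turn a raw probability into a label a front end can display.
    return "positive" if score >= 0.5 else "negative"

def endpoint(raw):
    """The shape of a Baseten-style API endpoint:
    preprocessing -> model -> postprocessing, all in Python."""
    return postprocess(fake_model(preprocess(raw)))
```

A UI component wired to this endpoint would simply pass the user's input to `endpoint` and render the returned label; the same function is what an external caller would hit over the API.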
[00:31:51] Unknown:
And the UI aspect is always interesting when you put some sort of design capability in front of engineers, because some of them are very design oriented and they'll build something that makes sense, and some of them will just throw everything at the wall and say, it does what I want it to do; but then you put it in front of an end user, and they say, my goodness, what have you done? I'm wondering if you can talk through some of the guardrails that you've put in place to maybe prevent that experience of just throwing everything on a canvas and saying, good luck, and to help people think about the end user experience of the application that they're putting together?
[00:32:28] Unknown:
There's nothing to stop you from creating a monstrosity in Baseten. Even internally, we have some engineers who build apps where you're like, that's beautiful, and others where you're just like, what is this? But what's more important to us is that we're trying to make it easy to do the right thing as much as possible. Right now, we have a very limited set of templates, and that's a very important part of our road map: creating these templates for the UI. If you need something to test the inputs and outputs of a vision model, we'll have a template for that. If you need something that uploads an audio file and runs it through a transcription model, we'll give you a template for that. If you need something for a content moderation workflow, where you need an inbox-type view so you can look at cases, we'll give you something that looks quite nice, at least from a design perspective, and simple enough that someone can start to build with it as a template. I think that's a really good way to point people in the right direction. A little less technical, but I think Airtable has done a fantastic job with this, and we love their software. If you wanna use Airtable as a database, right, and you wanna use it for a CRM, they give you a CRM.
If you wanna use it for a recruiting CRM, they give you a recruiting CRM that you can start to play with. I think it makes it easier; it takes some of the burden of those design choices out of the hands of the user. In terms of the model specifically,
[00:33:50] Unknown:
I'm wondering if you can talk to some of the limitations or constraints that you've had to impose as far as the size or scale or complexity of the model, or some of the types of models that work better, where maybe you do well with a vision transformer or a natural language model, but it's more difficult to deploy a real-time computer vision model? Just some of the ways that you think about the categories of models that people are building and deploying, and some of the limitations you've had to impose to make sure that you're able to provide a positive experience for your end users?
[00:34:26] Unknown:
Honestly, no. There are obviously limitations; like, if you tried deploying DALL-E 2 on Baseten, it would be pretty difficult. But it actually scales up pretty well, and we've seen people deploy all sorts of stuff, from language models to computer vision models to simple linear regressions. We have pretty native support for all of that. Just as an example, there's a pretty big open source model, GPT-J, with 6 billion parameters, which is kind of an open source version of GPT-3. We were able to get that deployed in less than an hour. So we do scale up pretty well. I'd say that, from a use case perspective, for the more real-time stuff, where the latency of these very large models matters, Baseten is probably not a great fit. But that aside, we really are quite flexible and can handle most types of models.
[00:35:18] Unknown:
The other aspect of using something like Baseten, where it's very easy and quick to get something up and running, is that a lot of people will rely on it for initial prototyping, where you say to your machine learning team, go ahead and do what you want, and tell me when you've got something that's gonna provide value, and then we'll build around it. And for the case where you build an application, it proves some utility, and then you say, actually, I wanna invest in this further and build it into the core of our product: what are the extension points for hooking into Baseten to add additional customizations, or the options for taking your Baseten application, exporting it, and customizing it to fit into whatever frameworks and tool chains you're using for product development?
[00:36:02] Unknown:
There's a truthful answer to this question. The optimistic version of myself would say that we should be able to scale with our customers, and I think over time we'll see more and more of that. That being said, there are 3 pillars to Baseten, which I've said a few times now, and all of them are very modular, so you can pick and choose which ones you want. This is really important to us: hey, maybe you already have a front end team, or maybe you're a big user of Retool at your company and wanna use Retool — you don't have to use Baseten's. The models have their own APIs, the API endpoints are obviously modular and have their own APIs, and the front end can be used with any back end as well. All of those aspects you can switch in and out as necessary. I think it's a necessity for us to be successful going forward that it's very, very cheap to get started with Baseten. What that means is that you don't need to use Baseten for everything to start with, but also that there are escape hatches out of Baseten. You can call the different parts of Baseten and benefit from them without having to buy into the whole platform.
[00:37:01] Unknown:
Going back to the model question, one of the continuing complexities that people experience as they put their models into production is the challenge of how to monitor them, how to understand what degree of concept drift they're dealing with, and when to retrain and redeploy the models. I'm wondering if you can talk through some of the ways that different model types, or the different frameworks somebody uses to build a model, influence the integration points that are available for pulling out some of those useful model-specific metrics, and then what the life cycle management looks like when somebody says, okay, this model has gone through enough drift that I actually need to retrain it and redeploy it, and just being able to
[00:37:42] Unknown:
version the models in the Baseten environment and push them up? Yeah. So we do provide monitoring out of the box. Frankly, it's not something we've invested the most in right now, and we will continue to invest more in it as we go forward. I think it's pretty tricky; you alluded to it, but you really have to condition your monitoring on the type of model. It's very, very different to monitor something that outputs a simple probability as opposed to a complete computer vision model. As we invest more in that, we're gonna have to figure out which model types we can serve best. Right now, we have really, really good request-level monitoring, and you can browse your outputs over time to see how they are drifting. To go to the second part of your question, though, Baseten has really, really good version management for models.
So you have a model and you have a new version, and it's very, very easy for you to deploy that over the top of it, to have some sort of rollout strategy where you shadow it — basically, run it in the background for a while, and then cut over when you're happy. It's very, very flexible, so you can do these things. Again, building a horizontal tool is very difficult because you have to pick and choose your battles. With the monitoring and the version management stuff, I think we are still far ahead of what the industry average is, but we have a lot of work to do in terms of making these 2 things really tightly linked to one another.
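The shadowing strategy mentioned here — running a new model version against live traffic in the background before cutting over — can be sketched roughly as follows. The class and method names are illustrative assumptions, not Baseten's actual API:

```python
import logging

class ModelRouter:
    """Sketch of a shadow deployment: the live version answers every
    request, while a candidate version quietly sees the same traffic
    so its outputs can be reviewed before promotion."""

    def __init__(self, live, shadow=None):
        self.live = live
        self.shadow = shadow
        self.shadow_log = []  # (input, shadow_output) pairs for offline review

    def predict(self, x):
        if self.shadow is not None:
            try:
                self.shadow_log.append((x, self.shadow(x)))
            except Exception:
                # A broken shadow model must never affect the caller.
                logging.exception("shadow model failed")
        return self.live(x)  # the caller only ever sees the live model

    def promote(self):
        # Once the shadow's logged outputs look good, swap it in.
        self.live, self.shadow = self.shadow, None
```

For example, `ModelRouter(live=v1, shadow=v2)` serves `v1`'s answers while accumulating `v2`'s answers in `shadow_log`; calling `promote()` makes `v2` live with no change to the calling code.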
[00:39:07] Unknown:
And so for people who are using Baseten for building out applications and experimenting with how to put their models into production, or maybe just using it as an on-ramp to experiment with machine learning as a capability before they go down the path of building their own models, what are some of the most interesting or innovative or unexpected ways that you've seen it used? You know, we've always had a bunch of interest from
[00:39:32] Unknown:
people building crypto things on top of us. It's just the state of the market; we didn't expect that, and it's not something we're especially keyed into, but it's cool seeing Baseten being used in ways that track how the market is evolving. I think some of the more exciting ones are people using it for user verification. We have a customer with a platform for children to talk to each other, and they try to make sure that only children participate. So they run their participants through a model that tries to guess the person's age, and if it thinks a person is an adult, they have someone verify it. I think that's super cool. And a European company uses it to figure out the optimal placement of offshore energy sources.
To me, these are the more exciting places; with Baseten, we hope we're enabling all these long-tail use cases of machine learning by lowering the cost of getting things working end to end.
[00:40:36] Unknown:
In your experience of building this business and exploring the different ways that people are using machine learning and the types of applications that they're building around it, what are some of the most interesting or unexpected or challenging lessons that you've learned in the process?
[00:40:50] Unknown:
People's expectations of software are higher than ever. As a developer, this is an amazing thing, because you have such great software to choose from, and it feels like we're still in the early innings there. But what it has also meant is that you truly have to build something that feels intuitive and solves a problem, and that just takes a long grind. The reason you see so many companies have to go and raise large amounts of venture money today is that people's expectations of software are higher than ever. It's something I underappreciated, for sure, a couple years ago, but I'm realizing it more and more. I think the most interesting thing we've learned working on Baseten is how many ideas people have about what machine learning can do for them, both right and wrong.
And I think that's what makes it so exciting to work on Baseten: ideally, we can
[00:41:41] Unknown:
end up empowering a whole new set of use cases that we didn't think about. I'm curious, what are some of the ways that you're using Baseten inside of Baseten to build Baseten?
[00:41:51] Unknown:
One thing which is rare in the machine learning tooling community today — I think there are open source products, and there's Weights & Biases for this stuff, but there's not that much software which is easily accessible by developers, where you can just sign up and use it — and that was really important to us. At the same time, we're kind of in the business of reselling compute, right? You can run arbitrary code. So when a user signs up, we try to figure out the likelihood of this person doing malicious things as quickly as possible, while trying not to ruin that person's experience. And that's a Baseten app: the moderation queue posts back to Slack, and the heuristics, the model, and the actions are all deployed using Baseten. That's pretty cool.
[00:42:35] Unknown:
And so for people who are interested in getting started with machine learning, either with one of your pre-trained models where they just wanna build an app around it and see what's possible, or where they have a model and they just wanna get an app up and running, what are the cases where Baseten might be the wrong choice?
[00:42:51] Unknown:
I think for super real-time use cases, Baseten is probably not the right choice; an internal build is better today. We'll get there, but we're not there today. I'd say that when you have more exotic optimization models, as opposed to prediction models, Baseten is probably the wrong choice. And then, inside companies, if you care a lot about deploying on-prem, Baseten can support that, but it's more painful for us and more painful for the customer than if you just use a cloud offering. So those are 3 cases where Baseten is probably not the right fit; we haven't made those things our number 1 priority for where we are today. They are all possible,
[00:43:37] Unknown:
but there are higher order questions that you need to answer before using Baseten for those things. Now that you have launched and your product is available for people to start onboarding and experimenting with, what are some of the things you have planned for the near to medium term, or any areas of focus that you're excited to dig into?
[00:43:55] Unknown:
Yeah. I think the biggest one is deeper integration with the developer workflow. Can you write it in your VS Code, with a little Baseten extension that deploys it? And we're gonna be open sourcing a bunch of stuff we've worked on, which I'm super excited about. But right now, over the next 12 months, we're just really focused on getting people value, whatever that entails — whether that means more models, which we're really investing in, or investing in monitoring. We're really excited about all of that, but the thing that comes to mind straight away is, how do we get even more embedded into the developer workflow?
[00:44:33] Unknown:
Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And as a final question, I'd like to get your perspective on what you see as being the biggest barrier to adoption for machine learning today.
[00:44:47] Unknown:
Really just getting buy-in from all the stakeholders. I think today machine learning has this dual-class status of having a ton of hype around it and also a lot of skepticism around it. What's important for adoption today is educating people about what's possible at the state of the art, where realistically most organizations are, and what it'll take for those things to converge over time. That will just require more success stories, more failure stories, and driving down the cost to iterate, which I hope we contribute to in some small way.
[00:45:23] Unknown:
Absolutely. Well, thank you very much for taking the time today to join me and share the work that you've been doing at Baseten. It's definitely a very interesting platform, and it definitely reduces the time and energy required to get an application up and running to prove out ideas. So I appreciate all of the time and energy that you and your team have put into that, and I hope you enjoy the rest of your day. Thank you so much. I appreciate you having me on the show.
[00:45:51] Unknown:
Thank you for listening. And don't forget to check out our other shows: the Data Engineering Podcast, which covers the latest in modern data management, and Podcast.__init__, which covers the Python language, its community, and the innovative ways it is being used. You can visit the site at themachinelearningpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts@themachinelearningpodcast.com with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Introduction to The Machine Learning Podcast
Interview with Tuhin Srivastava: Introduction and Background
Overview of Baseten and Its Purpose
Target Users and Use Cases for Baseten
Challenges in Building Full-Stack ML Applications
Technical Requirements for Supporting ML Models
Implementation and Architecture of Baseten
Evolution of Baseten's Design and Goals
Security and Performance Challenges
Design and Interaction Patterns for Baseten
UI Design Considerations and Templates
Model Limitations and Constraints
Monitoring and Lifecycle Management of Models
Innovative Uses of Baseten
Lessons Learned in Building Baseten
Using Baseten Internally
When Baseten Might Not Be the Right Choice
Future Plans and Areas of Focus
Biggest Barriers to ML Adoption
Closing Remarks