AI Engineering Podcast

This show is your guidebook to building scalable and maintainable AI systems. You will learn how to architect AI applications, apply AI to your work, and the considerations involved in building or customizing new models. Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.

11 November 2024

ML Infrastructure Without The Ops: Simplifying The ML Developer Experience With Runhouse - E40

Summary
Machine learning workflows have long been complex and difficult to operationalize. They are often characterized by a period of research that produces an artifact, which is then handed to another engineer or team to prepare for production. The MLOps category of tools has tried to build a new set of utilities to reduce that friction, but has instead introduced a new barrier at the team and organizational level. Donny Greenberg took the lessons that he learned on the PyTorch team at Meta and created Runhouse. In this episode he explains how, by reducing the number of opinions in the framework, he has also reduced the complexity of moving from development to production for ML systems.
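
For listeners unfamiliar with the project, the core idea discussed in the episode is keeping ordinary Python and dispatching it to your own compute, rather than repackaging it for a separate orchestration or deployment system. The sketch below is illustrative only; the names and arguments (ondemand_cluster, function, .to) follow the Runhouse documentation around the time of this episode and may differ in current releases.

```python
# Illustrative sketch of the Runhouse dispatch pattern; API names are
# assumptions based on the docs at the time of the episode, not a
# definitive reference.
import runhouse as rh


def preprocess(batch):
    # Ordinary Python/ML code, written and tested locally.
    return [text.lower() for text in batch]


if __name__ == "__main__":
    # Request compute (here an on-demand cloud CPU box); the same code
    # works against local, on-demand, or existing clusters.
    cluster = rh.ondemand_cluster(
        name="rh-example", instance_type="CPU:2", provider="aws"
    )

    # Send the local function to the cluster and call it as if it were local.
    remote_preprocess = rh.function(preprocess).to(cluster)
    print(remote_preprocess(["Hello", "World"]))
```

The point of the pattern is that the research code and the production code are the same object; only the compute it is sent to changes.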


Announcements
  • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
  • Your host is Tobias Macey and today I'm interviewing Donny Greenberg about Runhouse and the current state of ML infrastructure
Interview
  • Introduction
  • How did you get involved in machine learning?
  • What are the core elements of infrastructure for ML and AI?
    • How has that changed over the past ~5 years?
    • For the past few years the MLOps and data engineering stacks were built and managed separately. How does the current generation of tools and product requirements influence the present and future approach to those domains?
  • There are numerous projects that aim to bridge the complexity gap in running Python and ML code from your laptop up to distributed compute on clouds (e.g. Ray, Metaflow, Dask, Modin, etc.). How do you view the decision process for teams trying to understand which tool(s) to use for managing their ML/AI developer experience?
  • Can you describe what Runhouse is and the story behind it?
    • What are the core problems that you are working to solve?
    • What are the main personas that you are focusing on? (e.g. data scientists, DevOps, data engineers, etc.)
    • How does Runhouse factor into collaboration across skill sets and teams?
  • Can you describe how Runhouse is implemented?
    • How has the focus on developer experience informed the way that you think about the features and interfaces that you include in Runhouse?
  • How do you think about the role of Runhouse in the integration with the AI/ML and data ecosystem?
  • What does the workflow look like for someone building with Runhouse?
  • What is involved in managing the coordination of compute and data locality to reduce networking costs and latencies?
  • What are the most interesting, innovative, or unexpected ways that you have seen Runhouse used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Runhouse?
  • When is Runhouse the wrong choice?
  • What do you have planned for the future of Runhouse?
  • What is your vision for the future of infrastructure and developer experience in ML/AI?
Contact Info
Parting Question
  • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
Closing Announcements
  • Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
