Summary
In this episode of the AI Engineering Podcast, Tanner Burson, VP of Engineering at Prismatic, talks about the evolving impact of generative AI on software developers. Tanner shares his insights from engineering leadership and data engineering initiatives, discussing how AI is blurring the lines of developer roles and the strategic value of AI in software development. He explores the current landscape of AI tools, such as GitHub's Copilot, and their influence on productivity and workflow, while also touching on the challenges and opportunities presented by AI in code generation, review, and tooling. Tanner emphasizes the need for human oversight to maintain code quality and security, and offers his thoughts on the future of AI in development, the importance of balancing innovation with practicality, and the evolving role of engineers in an AI-driven landscape.
Announcements
- Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
- Your host is Tobias Macey and today I'm interviewing Tanner Burson about the impact of generative AI on software developers
Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what types of roles and work you consider encompassed by the term "developers" for the purpose of this conversation?
- How does your work at Prismatic give you visibility and insight into the effects of AI on developers and their work?
- There have been many competing narratives about AI and how much of the software development process it is capable of encompassing. What is your top-level view on what the long-term impact on the job prospects of software developers will be as a result of generative AI?
- There are many obvious examples of utilities powered by generative AI that are focused on software development. What do you see as the categories or specific tools that are most impactful to the development cycle?
- In what ways do you find familiarity with/understanding of LLM internals useful when applying them to development processes?
- As an engineering leader, how are you evaluating and guiding your team on the use of AI powered tools?
- What are some of the risks that you are guarding against as a result of AI in the development process?
- What are the most interesting, innovative, or unexpected ways that you have seen AI used in the development process?
- What are the most interesting, unexpected, or challenging lessons that you have learned while using AI for software development?
- When is AI the wrong choice for a developer?
- What are your projections for the near to medium term impact on the developer experience as a result of generative AI?
Parting Question
- From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
- Prismatic
- Google AI Development announcement
- Tabnine
- GitHub Copilot
- Plandex
- OpenAI API
- Amazon Q
- Ollama
- Huggingface Transformers
- Anthropic
- LangChain
- LlamaIndex
- Haystack
- Llama 3.2
- Qwen2.5-Coder
[00:00:05]
Tobias Macey:
Hello, and welcome to the AI Engineering podcast, your guide to the fast moving world of building scalable and maintainable AI systems. Your host is Tobias Macey, and today I'm interviewing Tanner Burson about the impact of generative AI on software developers. So, Tanner, can you start by introducing yourself?
[00:00:28] Tanner Burson:
Yeah. I am VP of engineering at Prismatic. We're a B2B SaaS company focused on productized integrations. So we, have a product and platform that allows our customers to integrate integrations into their product, making everything from onboarding
[00:00:47] Tobias Macey:
to data manipulation for their customers easier. And do you remember how you first got started working in the ML and AI space?
[00:00:54] Tanner Burson:
Yeah. So I I wouldn't say I am primarily, in the machine learning or or AI space for sure. I've been an engineering leader of varying sorts for for more than a decade, and I've often found myself managing or leading initiatives that involve data engineering practices and and those sorts of things ranging from, you know, classic just we have a pile of data and we need to distill it down to something smaller and usable to some more complex predictive modeling of sales or inventory data or problems like that. Also have done some work leading some kind of speculative initiatives on the kind of strategic value of AI within, within our products and within within the developer base. Digging now into the topic at hand, can you start by describing the types of roles and work that you consider encompassed by the term developers so that we can scope this conversation? Yeah. It's a it's a great starting point. I think it's always been a blurry line, and I think that line has only gotten blurrier over time. I I think things like, you know, this discussion on AI tooling and developers starts to even make that line harder to to discern. But, typically, when I'm talking about developers, I'm largely talking about people whose primary job is writing code to create software, which seems maybe broad or maybe narrow depending on on your viewpoint, but that's that's the starting point I usually work from.
[00:02:19] Tobias Macey:
Before we get too deep into the topic too, you mentioned that you've been an engineering leader for a long time. You're currently the VP of engineering for Prismatic. I'm wondering, given your role at Prismatic and the fact that it is a company that is also focused on enabling other engineers and teams to manage their software flows, what type of visibility that gives you into the effects that AI is having on developers and their work and some of the insights that that has illuminated in in the work that you're doing? Yeah. I think it's certainly, one of the things I think is very interesting,
[00:02:54] Tanner Burson:
about this product and company at this particular time and place. I mean, at a at a starting point, we are a software company. So first and foremost, like, we build software. So how this works with our teams, how we deal with this stuff day to day is certainly certainly something we we have to deal with. But as you noted, we work a lot with other developers and product leaders at other companies, and we see the things that they are building both for themselves and for their customers. And this gives us an interesting view into where, where they're spending their time, where their focus is, and where that's going.
Insights at this point are are hard to gauge. I think we're still seeing, certainly a lot of activity around AI integrations and how to get the right kinds of data into third party AI platforms and those things. But I I wouldn't say there's yet been a real concrete trend of this is exactly what everybody is doing other than everyone is doing a lot of experimentation. I don't think we've seen anybody coming in with a really strong viewpoint that they have exactly the right answer and exactly the right the right process yet. But a lot of our customers are are experimenting with AI and how it fits into their products and how it fits into their workflows. And so we see a lot of different AI vendors being used for integrations and a lot of different kinds of data and data flows going through there. But I don't think there's yet been a pattern other than the pattern of lots of experimentation.
[00:04:12] Tobias Macey:
One of the overall narratives that has been going back and forth over the past definitely the past year, maybe the past 2 to 3, depending on how you want to frame it, is the tension between the AI maximalists and the AI skeptics where you have some famous announcements from, I think it was the CTO at AWS saying that AI is going to put all the engineers out of a job where we don't need any more engineers. We're just gonna let AI do all the work. And then you have some very prominent and senior engineers who are saying, I've tried to use AI to do my work, and it's garbage, and I have to correct it. And it actually takes me more time to use it than if I just did it myself. And I'm interested in, at the 30,000 foot view, what you see as some of that long term impact on the job prospects for software developers and some of the ways that you see them being impacted at the career level from these generative AI capabilities.
[00:05:09] Tanner Burson:
Yeah. I think this is probably the more interesting, angle to take on the the kind of AI maximalist versus AI skeptic, angle. I think the AWS view, is an incredibly self serving one to attempt to go sell more AI product. I think it's interesting just in the last couple of weeks, Google had I don't remember who it was at Google, but it was a relatively high up, engineering leader had said, you know, with their in house custom trained AI tooling for for their coding, they had reached 25% of their code being developed via AI. My experience tends to be that if you're hitting 25%, then you've hit the least interesting boilerplate level of your software. So I I think that's an interesting data point that Google isn't saying this has taken over large portions of what they're doing, and and that's that's interesting. I think when you start to put the angle of what does this mean for people's job prospects going forward and where does this where does this where does this really lead? I start with a simple premise that I don't believe there will be less software tomorrow than there is today. I think we are only going to create more software. I also think we will still need more software engineers to create and manage all of the software that's that's being created. So I don't expect the the kind of AWS view there of, all software engineers will be replaced by AI. Maybe the line of growth changes, and we've seen that in the past. I think if you were to look at the growth rate of hiring in in software over time, it it's certainly not just up into the right forever. I'm sure we will see some some changes there, but I I don't expect we're gonna see a a a massive retraction in what's there. I do think today is a really hard day to be a new software engineering or comp sci graduate out looking for a job. And I I I think there's going to be a lot of soul searching and struggling at a lot of companies to figure out what the right balance of more junior engineering talent in their organizations will be. That said, I think it's incredibly important that we continue to to hire and build the next generation of of software engineers. If you run from the AWS premise, then at some point, we just run out of software engineers entirely, because they will all eventually age out of, age out of the the career in the workforce. So I think we need to continue to to build the, the next generation of software engineers and figure out how to make them as effective as today's engineers are.
[00:07:24] Tobias Macey:
There is an interesting semantic debate to be had as well between the ways that we say software engineer versus coder and the ways that we think about the impact across that divide where, to some approximation, if you say I am a coder, then that just means that you're maybe more of a junior level person. You just write a bunch of for loops and you do exactly what you're told versus a software engineer where you're maybe thinking more at the systems level trying to translate the product requirements and requests into the technical architecture and how that is going to actually be implemented in code, where if you want to use that kind of Boolean approximation, then maybe software engineering becomes more valuable because the AIs are not to the point where they can actually translate user requirements into a systems architecture and understand at a broad reaching level what that means across the various different systems modules.
Whereas if it's just, I want to write a for loop, that is very much where the AIs are capable. They are gaining a little bit more broad reaching, you know, multi file mutation capabilities, but we're not really there yet. And I don't think we'll get to a point in the next few years where we can say, I want to have an application that does x, y, and z, is able to scale from 0 to infinity, etcetera, etcetera, and it will do all of your systems design, DevOps, etcetera, but we're a ways away from that. And to your point too about the prospects of junior engineers, I think there was an interesting conversation I had with one of the founders of Tabnine where his impression was that it maybe actually enables junior engineers a bit more and allows you to scale your software engineers better because the AI becomes the pair programmer and encapsulates more of the knowledge of that senior engineer so that the senior engineer is able to support a larger number of more junior engineers in their work. Yeah. That that's an interesting view I haven't heard.
[00:09:27] Tanner Burson:
I think if I were a junior engineer today, that's certainly an angle I would I would argue for that I I can be more productive, and and more capable than I could have been with without these tools. I think when you talk to more senior engineers, they often describe some of the current generation of AI dev tooling as like pair programming with the junior engineer. You you have to assume that it doesn't have all of the context all of the time that it may be missing pieces, that are important. It may be misremembering key details or inventing things that don't yet exist. So I think it's interesting which side of that fence you sit on. You view the tool, as providing the other side of the value. Junior engineers believe it, it helps them behave more senior, but senior engineers see it as more of a substitute for a junior engineer.
And I think there's probably pieces of both of those stories that are very true today.
[00:10:21] Tobias Macey:
Digging now into some of the actual workflow impact and the ways that AI is impacting the day to day of engineers as they conduct their work in the present. There are a lot of different utilities, maybe the most broadly known one being GitHub's Copilot and various approximations of that. And I'm wondering what you see as the general categories of tools that are most impactful, both positive or negative, and then maybe that are most useful in that development cycle.
[00:10:55] Tanner Burson:
I think it's I think it's interesting because I don't I don't think we have the tools that will end up defining this category yet. I think the the current batch of, you know, copilot tools, which everybody seems to have branded something as copilot, today, are fine. The they they serve, you know, they they serve a purpose, but they're not particularly exciting or awe inspiring. I think the things where it will get interesting is probably years from now as we're able to get more of your product data into the AI system and closer to real time. Those tools are going to get really interesting. Today, as far as the tools that most people are adopting, it's copilot tools, it's code generators, it it's things of that nature.
And I honestly haven't seen a huge a huge positive impact from most of those tools. The the ones I have seen and the ones I've seen people adopting help with the easy work. To go back to the kind of Google 25% example: if you need to make a Go type or class that will map some JSON data, I'm sure it's going to generate that way faster than you could type those characters out, but that's that's not particularly exciting or or challenging work. And in a lot of cases, I have seen it consume more of people's time and energy than it has saved. I think code reviews become even more important in a world where a team is more reliant on AI generated code.
You can no longer assume that the person writing the code has context beyond the lines of code that are presented because many of the AI tools aren't able to gather enough context to to look at the the scope and scale of things. Things like performance issues, scale issues, are really hard for these tools to understand and to reason about inside of a more mature code base today. So code review has to has to really step up and make sure that the problem being presented is being solved not just at a line by line level, but as it fits into the broader system.
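To make the boilerplate point above concrete, here is a minimal sketch of the kind of mechanical code that assistants tend to generate well. The transcript mentions a Go type; this sketch uses a Python dataclass instead, and the payload and field names are purely hypothetical.

```python
import json
from dataclasses import dataclass


@dataclass
class OrderEvent:
    """Typed view of an incoming JSON payload; the fields are illustrative."""
    order_id: str
    customer: str
    total_cents: int

    @classmethod
    def from_json(cls, raw: str) -> "OrderEvent":
        data = json.loads(raw)
        return cls(
            order_id=data["order_id"],
            customer=data["customer"],
            total_cents=int(data.get("total_cents", 0)),
        )


# Example usage: parse a payload and work with typed attributes.
event = OrderEvent.from_json('{"order_id": "o-42", "customer": "Acme", "total_cents": 1999}')
print(event.customer, event.total_cents)
```

This is exactly the "least interesting boilerplate" category: fast to generate, easy to review, and not where the hard engineering judgment lives.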
[00:13:10] Tobias Macey:
The other interesting application that is starting to become a little bit more available is the use of AI for an initial pass of code reviews or pull request reviews. I know GitHub, I think, recently released something along those lines. Tabnine recently released something to that effect. And I I haven't gotten a lot of personal experience with that yet, but that's another one of those things where it seems like it could cut both ways where maybe it saves time because it gives a good first pass of, oh, I didn't think about that. Maybe I need to adjust this here. But it could also just end up being a waste of time if it doesn't have enough understanding about the reasoning behind certain design choices of the code. And so maybe you're just going to be fighting against it because it says, oh, you shouldn't do this. You should do it this way. And you say, well, I I know why I did it this way, so just leave me alone.
[00:14:08] Tanner Burson:
Yeah. Absolutely agree. I think I would probably be more excited about tools like that if I were on a very small team, 2 or 3 developers working on a brand new product, maybe something like a mobile app where the the scope of interaction is more more knowable and more contained, and just having the the sense that there is something else helping kind of review and push us forward. I I think there's opportunity in places like that for tools like that to be more valuable, but I tend to agree that in a larger team and in a larger product, they're as likely today to add, to add noise in additional cycles as they are to really, streamline your workflow in any meaningful way.
[00:14:54] Tobias Macey:
Another interesting tool that I have seen, but, again, not experimented with extensively, is something called Plandex, where what it offers is a way for you to add multiple different files in a repo into its context and then, say, you know, create a plan of what you want to do across those files, and then it will generate that plan and give you a way to refine it, which for things like broad reaching refactorings, like, I want to move this module from this location to this location and make sure that all of the references to it are updated appropriately, or I want to rename this class or change the signature and do that a little bit more broad reaching refactoring where maybe I need to refactor it across both the core library and all of the different applications that are consuming it. But, again, there there is that cycle of you don't know how accurate it's going to be, and so there has to be a lot of human supervision, which maybe it speeds things up, but I think it takes a little while to get to that point of level of comfort and understanding of how the tool operates to be able to feel secure in allowing it to make those changes.
[00:16:07] Tanner Burson:
I think it's one of the things I find most challenging with a lot of the AI tools today. And I think that tool sounds very interesting, and I it's not one I'm I'm familiar with. I could actually imagine a lot of use cases for things like Terraform where there's just a lot of manual manipulation of, of text as you make relatively small changes across across things. But your your point about workflow and being comfortable with it, I think, is a really key thing that I don't think has really been nailed yet in a lot of the AI dev tools. Dev tools are such a personal choice for everyone to the point that we have the kind of infamous, you know, the editor wars of Vim versus emacs that have been going for 40 years or, you know, things like that where people are very attached to the tools and the the choices that they've made and what tools they've selected and how they're configured and how they work.
And people get very attached to the workflows and behaviors that those tools allow them to to create. And I don't think the AI tools have yet reached that level for most people. I I think there is still a bit of a discomfort and still a bit of a distance between them and a lot of these tools. I think there are some folks who have spent enough time and effort to to feel like they fully adopted them and understand how to fit them in. But most of the engineers I've met and talked with are still still evaluating how it fits into their kind of preferred and ideal ideal workflows.
[00:17:38] Tobias Macey:
And that point of comfort and understanding, I think, is really a big piece of why people get very attached to their specific tool chains and workflows because they really grow to understand it at a deep level. And LLMs, to a large extent, are still generally a black box of you put something in, you get something out. You maybe have a vague understanding of what the mapping between input and output is, but you're not guaranteed to get the same output every time. And I'm wondering what you see as the necessity of building understanding of LLM internals, how they're built, how they operate, and how that maps to the level of comfort in letting the tool have more free rein in your workflow and do larger and larger pieces of work for you.
[00:18:32] Tanner Burson:
Yeah. I think most engineers want to understand their tools. And so I think a lot of folks are interested in in figuring this out, or they're uninterested in figuring it out, and they're less interested in the tool because of it. I think there is a ton of opportunity to for for folks to understand these tools better and to be more comfortable with where their their strengths and weaknesses are. I think the unfortunate nature is the the level of complexity and depth of these is something that most folks struggle to want to get their head around, to act as a dev tool.
It's the level of complexity of learning how core operating system fundamentals work and core, processor fundamentals and things at the the very lowest levels of the stack for most folks in terms of the complexity scale. And that's a lot to wanna take on to refactor your class better, to to generate better boilerplate to adapt, what you're doing a little better. And so I don't think a lot of engineers have, and I I would put myself in that camp of not having spent as much time as I could going deep in that. I think there's another challenge there though, which is most people aren't using an LLM directly. They're using the OpenAI API or they're using GitHub's wrapper around it or they're using, Amazon Q. I assume somebody actually uses it.
And it's even more of a black box than an LLM is on its own. You know, you don't know fully what the system prompts are that are going into those, just from a prompt perspective. You don't know what additional adjustments they're making to the input or the output that's coming through. You don't know how many times it's rejecting, output before selecting one that it decides is, is sufficient given some other set of constraints. And so there's a ton of of hidden important details, and I think engineers hate not knowing what's going on behind that curtain for something they think is very important. And it's unlikely anytime soon we're going to have a lot of visibility into that into that API layer.
[00:20:44] Tobias Macey:
I think that point on the OpenAI API is also an interesting thing to bring up in that from the bits of exploration and tinkering that I've done personally, it seems that a substantial portion of the ecosystem of developer tooling with some sort of AI as the driving force was at least initiated around that monoculture of presuming that the OpenAI API was the thing that you were interacting with. And there has been a substantial amount of effort in the development and production and release of these more open models, more open ways to execute those models, so things like Ollama, the Hugging Face transformers, etcetera.
But most of those tools are still presuming that the OpenAI API is the thing that you're talking to. And so all of those other providers have implemented that API contract at some layer as a facade, which I do think still adds that extra bit of friction of maybe I don't want to use OpenAI. I want to use the model that's running on my laptop, but now I have to figure out what are the assumptions that this tool has made about the OpenAI API and how it's interacting with it for me to be able to translate that to the facade and get it to work happily. And I'm curious what you have seen from your own experience of working with your team and your customers about how they're maybe navigating that friction.
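To illustrate the facade being described, many local runtimes expose an OpenAI-compatible HTTP endpoint, so a tool written against the standard OpenAI client can be redirected with a single configuration change. This is a hedged sketch, not a definitive setup: the base URL and model tag assume a default local Ollama install, and the API key is a placeholder that a local server typically ignores.

```python
from openai import OpenAI  # standard OpenAI Python client (v1+)

# Point the client at a local OpenAI-compatible server instead of api.openai.com.
# Assumes Ollama is running on its default port and has pulled the llama3.2 model.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama3.2",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Suggest a name for a function that retries HTTP calls."},
    ],
)

print(response.choices[0].message.content)
```

The friction shows up when a tool hard-codes assumptions beyond this interface, such as specific model names or response fields that a local facade does not fully implement.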
[00:22:20] Tanner Burson:
I I think they're navigating it as best they can, which is the way everybody is doing it. I think you're right that there there is not a a standard for interfacing with with a lot of these models and engines at this point, but I do think there's become a bit of a de facto standard of the OpenAI API interface. I guess, on one level that gives you some standard to to aim at, but I agree that it it is probably long term limiting the the other capabilities and and other choices that folks have. I think we've seen we've certainly seen customers implementing against OpenAI directly, and we've certainly seen a fair number of other, APIs that are are mimicking that as best they can, including in the kind of local tooling space of of trying to create proxies and wrappers around those to, to make them behave the same.
But, yeah, I think it'll be interesting to see in the next few years how that API surface changes and whether any of the other, either local tools, like the Ollama, set of tools or third parties, whether it's Anthropic or or Google, managed to build an interface that is more compelling and interesting and pushes that pushes that forward.
[00:23:36] Tobias Macey:
Another piece of this too is that there are a lot of off the shelf tools that are focused on different areas of development. They make different promises about how they're going to influence and accelerate the way that you work. There are also all of the component parts to be able to put together your own tools in the form of things like LangChain, LlamaIndex, Haystack, where you can build your own AI stack from whole cloth and make it work the way that you want it to, which, again, requires a lot of exploration and experimentation. But I imagine that once an engineer has gone through that effort of building something that suits their workflow, does exactly what they want in the way that they want it, then maybe it will actually accelerate their work more than if they're trying to use one of these off the shelf utilities.
And I'm wondering if you have seen any movement in your experience of working with your team and your customers of people who are going down that path of investing in tool development where an AI is a component of that workflow, but they are making it more bespoke than what is being produced as a generally available and generally applicable toolset?
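As a rough sketch of that bespoke direction, the snippet below wires a small personal dev tool out of ordinary parts rather than a full framework: it reads the staged git diff and asks a locally hosted model, again through an assumed OpenAI-compatible endpoint, for a first-pass review. The endpoint, model name, and prompt are illustrative assumptions, not a recommended implementation.

```python
import subprocess

from openai import OpenAI

# Hypothetical personal tool: first-pass review of staged changes by a local model.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

# Collect the staged diff from the current repository.
diff = subprocess.run(
    ["git", "diff", "--staged"],
    capture_output=True, text=True, check=True,
).stdout

if not diff.strip():
    raise SystemExit("No staged changes to review.")

review = client.chat.completions.create(
    model="llama3.2",
    messages=[
        {
            "role": "system",
            "content": "You are a careful code reviewer. Flag likely bugs, security "
                       "issues, and unclear naming. Keep the feedback brief.",
        },
        {"role": "user", "content": f"Review this diff:\n\n{diff}"},
    ],
)

print(review.choices[0].message.content)
```

A human still reads the diff and decides what to accept; the appeal of the bespoke route is that the prompt, the context gathered, and where the tool sits in the workflow stay under the engineer's control.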
[00:24:54] Tanner Burson:
It's a really interesting question. I can't say I know of any. None of the teams I have directly worked with have invested the time in building their own AI based dev tooling. I've seen some small experimentation in that direction, but it didn't it didn't result in something that was that was ultimately usable. You also hit on something that I think is interesting right now in particular, and talk about, you know, an individual developer doing this to to improve their workflow. And I think that's probably happening in places that we're just not seeing because what what choices individuals make on their tooling and their dev is constant, and it it's a constant shifting landscape.
But that's a large investment for 1 individual. And I'll be interested to see whether that repeats. Like, does this get to a place where an individual developer is able to easily build these things for themselves? Like, we've seen teams attempt to do this, but those tend to be larger companies where they have, you know, existing internal developer experience teams or internal platform teams who can make this part of their kind of internal productized platform. But I haven't seen that as much at our size and even a lot of our customers' size with that being a major, a major focus yet. Most of those AI initiatives are more focused on their product. And so if they're building out internal AI infrastructure, it's about how that can serve their product, not necessarily their internal developer tooling.
[00:26:27] Tobias Macey:
I think too that one of the reasons that that hasn't become as widespread is because we're still very early in this process. All of the tooling and libraries are still very much in flux. There are constant shifts in the capabilities of the models themselves, and I'm interested to see how the landscape develops of maturing some of those building blocks so that they're a little bit easier to snap together like Legos rather than just, here's a chunk of wood, here's a lathe, good luck.
[00:27:02] Tanner Burson:
Yeah. And at at this point, it doesn't even feel like, we're we're being handed lathes necessarily. You you get a a chunk of wood and a sharp piece of metal and and told to figure out how to spin it up to speed so that you can get it, whittled down. So there's there's obviously tons of opportunity there to continue to, continue to build the to your point, the Legos, the building blocks that will allow this to be a more composable set of tools than it is today. They're relatively monolithic tools today. I one of the other challenges will also just be training. I think the the training process, the amount of data, the time, the compute that's there, is still very expensive on all of the dimensions of, you know, time and compute both are are are expensive even for something as as small as a code base.
And so I think we're gonna need to continue to find better ways to to manage those processes and those tools to get to a point that you can have something more individualized than what the, kind of prepackaged third party solutions look like right now.
[00:28:05] Tobias Macey:
Yeah. I I do think that that is another interesting aspect of this is that if you do want to do anything fairly sophisticated with these AI models, you need a pretty decent set of hardware to be able to execute it on even if you're just doing it on your local laptop. You typically still need some sort of GPU with a decent amount of VRAM available to it, and not everybody has that. There there has been movement more towards building some of these smaller models. Llama 3.2, I know, has a fairly small model, which is designed to be able to be run more on CPU hardware. I think the Qwen2.5-Coder model just launched.
I just read about that this morning as we're recording this, and that one has a 0.5 billion parameter model. So that's fairly small as far as things go. So I do think that some of these smaller models with better tooling around the context creation will maybe improve the ability for individuals to do this on their own. But, yeah, the the hardware requirement is definitely a pretty significant constraint still.
[00:29:12] Tanner Burson:
It'll be interesting to see how the smaller models behave in the use cases that people care about. The the impression thus far has been that the the bigger the model, the more tokens, it it can accept, the better the outcome will be for the the software use cases. It'll be interesting to see as some of these, lighter models come together, whether they're able to be, tuned in ways that are just as valuable on the software side even given the the the smaller scope.
[00:29:47] Tobias Macey:
Another interesting and useful angle on this overall problem is so for individual developers, they have to do their own decisions about what is useful. Is this actually improving my productivity, or is it wasting my time? But speaking as an engineering leader, there is also an equal importance on understanding the landscape, understanding the impacts, both positive and negative. And I'm wondering how you are evaluating and offering guidance to your team on the ways that they are incorporating and using these AI powered tools.
[00:30:27] Tanner Burson:
It's a really interesting challenge at this point, because a a given team will have the full breadth of AI enthusiasm within it. You will have the the engineer at one end who's, you know, built a machine at home so that they can go train and build and manage their own models, and they're they're super deep in it. And at the other end, you have, folks who just wanna use them with their 2 color syntax highlight theme, and that's it. And they they're not interested in in going through there. And so there's certainly a balancing act in figuring out how to make sure that everybody, regardless of their enthusiasm for it, is is able to to be productive.
In my case, you know, the the kind of baseline is we enable the copilot style tools, whether it's through GitHub or or other services that we subscribe to. Let our engineers use those in whatever ways they they think are fit. Some enable them for everything. Some have them turned off. Some, have them in kind of chat only mode where they can poke it when they need something, but, otherwise, it stays out of their way. But just like any other tool that we look at adopting across the team, we have to talk about it as a team and talk about what's working, and what's not. Decide whether there's alternatives we wanna investigate. We have, you know, security and compliance policies and what kinds of tools we can adopt.
That absolutely applies here. So we have to be conscious of what the what the requirements and and capabilities of those tools are, how they fit into our broader, compliance standpoint. And then lastly, you know, the the other bit is, you know, cost of these tools is not, is not free. And so we've gotta balance the the the net gain in terms of productivity against what the the actual cost is for the tools, and that that's something, you know, we have to continue to to pay attention to. On a, you know, personal level from the leadership side, I read about a ton of this stuff. I talk to a lot of my peers in other organizations, talk about what what they're seeing, what their teams have found effective, what's not, and use that as part of the discussion and and things we have, have internally. But like anything else, there's no one true answer. It's a it's a constant, back and forth and constant, evaluation of whether there's, new ways we can we can be more effective.
[00:32:54] Tobias Macey:
Absolutely. And on that note of compliance, there is also the subtext of risk where that can take the form of risk of, corporate secrets and private code getting leaked depending on the tool chain that you're using and the ways that they manage the collection and storage of data that is submitted to them. And then there is also a risk in the scope of you are relying on and trusting this AI to give you code that is going to go into production and serve your customers, and maybe it introduces security vulnerabilities or other types of bugs. And so it actually ends up being a net negative to your overall team productivity.
And I'm curious how you're thinking about that management and mitigation of risk in the use of these AI tools.
[00:33:46] Tanner Burson:
Yeah. You you nailed the 2 risks that are are probably highest on my mind. One is just the security of, of the service since, as we mentioned, it's hard for a lot of, individuals and teams to build these things in house and to have fully, in house trained infrastructure. You're relying on a third party who will have access to, you know, some amount of your code, potentially data, and in some cases, may involve running, you know, software they provide, whether it's just a CLI wrapper or something similar, on on developer laptops, which are relatively high secure environments in terms of what they're capable of and what they see. So that's that's certainly the top of my list of risks. So much like any other vendor we would adopt, we have to go through a security and compliance review, make sure that their their policies and standards meet ours. And then, you know, give it the the eye test of, like, do we believe that this is a a vendor that is taking us seriously and that we would want to partner with for for these sorts of use cases?
That's far and away the first and foremost thing we have to pay attention to. I think the harder one, though, is the the kind of code quality, risk that you were describing of the the introduction of subtle bugs or or challenges. Harking back to something we talked about earlier, I guess this is where code review has to really play a huge role and has to evolve. I think teams have to look at code differently. They have to pay attention to the code differently. I think in in teams that may be adopting the some of the AI tooling more completely than, certainly I am, but, many many are those code reviews may need to happen earlier in the process.
I certainly treat the code generated by an AI tool as code I must immediately review, and believe before it goes into even a branch or a code review for another engineer to see. And so I think we have to pay attention to that at at a much deeper level. I know with my team, we have mandatory code reviews for everything that we look at. Most teams I've worked with in the past have had had similar behaviors. And I think with the addition of more, third party, added code via AI or or other means, those code reviews become even more important in ensuring that the team is trained, prepared, and understand how to really dig into concerns like security, like performance, like scale, and identify those issues and resolve them before they, before they're pushed live is is even more important.
[00:36:19] Tobias Macey:
Another interesting aspect of the code review problem too is that because you are able to generate a lot more code faster, whether or not it's good is subjective. But because of the fact that you can say, I want you to write this whole class and then implement usage of that class and then refactor this thing over here, your pull request has the potential to explode by maybe an order of magnitude in terms of the number of changes that are included in one change set, which increases the burden on the human doing the review because there is a lot more code for them to have to look through. So I I can see that as being a a net negative to team productivity because while you're producing more code and maybe that is a benefit to productivity, it means that the person doing the review has to take a much longer time to look through it all, especially if they know or suspect that a substantial portion of that was generated by an LLM.
[00:37:20] Tanner Burson:
Yeah. There's certainly potential for very negative feedback loops, in in this sort of process, and it's something it's something to pay attention to. And it's it's something I I also think highlights part of why I am not convinced of the the AWS argument that, you know, software will be replaced by AI. The ability to go understand this, the ability to construct the change sets in a way that is meaningful and manageable and and representative of what's going on is currently, and at least based on the the the current set of tools, a human only set of interactions.
I think there's still a long, long road before the the the kind of code generated AI is able to to incorporate that sort of concern and that sort of, attention to to the code that it's creating and the the process of getting that code created.
[00:38:16] Tobias Macey:
I I think another interesting aspect of risk as well is the personal risk to the engineers who start to use the AI as a crutch of I know that this is directionally what I want. Go ahead and generate it for me, and then maybe I'll go through a few cycles of refining it. But I'm largely depending on the AI to do the hard work of writing the code, generating the algorithms, understanding the context. And so maybe that leads to a long term decline in your own ability to understand and create those logical structures. I've seen some similar studies, more in the creative realm, where using LLMs to generate creative content leads to a measurable decline in your level of creativity in subsequent tasks. Obviously, it's a very small scale study, very limited in scope, but it is another aspect of this problem space that I think is interesting to keep in our minds as we continue to push forward in the increased capabilities of these tools.
[00:39:25] Tanner Burson:
Yeah. I think that that makes a a a ton of sense, and I get it at a simple level. Almost anything I've ever gotten better at in my life required me to do more of it, not less of it. And so if if an individual's goal is to become a better software developer, better software engineer, you know, more more productive portion of of their team and the the rest of it, the idea that you will get there solely by, delegating more of, the work to an AI tool seems counterproductive in the long term. And I I can certainly see a world I think there's another angle of, potentially the the reputation hit of you not paying enough attention to what it's generating and continuing to push out mediocre AI generated code that generates those longer review cycles that we were just talking about and requires more time from a reviewer and more feedback loops with the reviewer.
Long term, that feels like a a a negative spot to be in as as an engineer if you're not improving the quality of what you're you're working with with the team that you can end up in a position of of losing some of your credibility or some of your respect within within a team.
[00:40:37] Tobias Macey:
Absolutely. And another interesting takeaway, I believe it was from that same conversation I had with the founder of Tabnine, was an offhanded remark I made that with the growth of AI tools for generating code, humans become the intermediate representation of software.
[00:40:57] Tanner Burson:
Yeah. It's it's interesting. I I've I was having a a conversation with with an engineer the other day, and he was describing how software to him has always been similar to writing. He didn't come from a computer science background. He came from, another set of industries. And the portions of his experience and brain that got excited about software were the same that got excited when he was writing. And I think it was a it was a reminder to me of something that I've always believed, but maybe not been able to put as succinctly as as came out of that conversation. But we write code for people, not for computers.
I know it seems counterintuitive because the code runs on computers and it it does what it does. But if we were only writing code for computers, we would all be writing assembly because, ultimately, that's much closer to what the computer cares about. It's what is closer to what's going to be executed. We could get down to computer microcode, whatever we wanted to to kind of bottom out to here, but we don't predominantly write, in the lowest level, of of tooling that our our computers can operate in. We work in many, many, many higher levels of abstraction, and we do that because it's easier for people, not because it's easier for computers.
And I think it's important because the code we write is primarily read by other people, not by the computer. The computer reads something that was compiled and ran through multiple other layers of of abstraction. The the code we write is primarily read by other people. And I think that's where some of the conversation around, you know, AI code gets lost. If it was purely about efficiency, well, you just automate your compiler. Make make it just emit more code faster, like, come up with even more higher level abstracted languages. And I'm sure some of that will will happen, with with some of the AI tools. But most of the code today is written for people, and I think it's important to remember that, and it comes up as we talk about code reviews in those pieces as well that that is the more common consumer of our code.
[00:42:53] Tobias Macey:
Yeah. Ultimately, nobody cares about software. Nobody's paying for software. They're paying for a solution to a problem, and it's just that software happens to be a means of, producing that solution.
[00:43:05] Tanner Burson:
Absolutely. And I think that problem solving aspect is ultimately why we do what we do. If we weren't solving real problems with it, we probably wouldn't have careers doing this. Some of us might do it for fun, but most of this is really about solving real problems, and it it happens. The code that turns into software is the way we we do a lot of it. But losing sight of that is certainly a a bad outcome if that's where if that's where some of this leads.
[00:43:34] Tobias Macey:
And in your experience of working in this space, guiding your teams, educating yourself on the ecosystem, what are some of the most interesting or innovative or unexpected ways that you've seen AI used in the development process?
[00:43:48] Tanner Burson:
I I think some of the things I have started to see that are really interesting involve less the generation of more code, but more of the feedback cycle after the release of the code and how that can fold back into the code. There are a couple of companies, whose names escape me right this second who are working on things like taking an intermediate representation of your system and mapping it back to your observability data. So being able to look for performance regressions, bugs, errors, exceptions, things like that out of your observability data and map that back to a runtime understanding of the system and ultimately back to where in the code those those things are represented. So not not the simplistic, like, exception was raised on line x sort of thing, but looking at more complex regressions and challenges and trying to map those back through the system, to the source.
I think some of those are incredibly interesting, and long term have huge benefit to how how we're able to evolve software and how we're able to to improve it over time. I think a lot of the focus today is on the the kind of 0 to 1 phase of software, which it's easy to see the gains. It's easy to see that very quickly. But it's ultimately a very small part of the software development life cycle. Writing the code is often the shortest, of of the processes involving code. And I think as these tools expand, as we start looking beyond that, that's the stuff I'm more interested in seeing where, where it goes.
[00:45:26] Tobias Macey:
And in your own personal experience of navigating this ecosystem, trying to stay apprised of the capabilities and the pitfalls, what are the most interesting or unexpected or challenging lessons that you've learned in the process?
[00:45:42] Tanner Burson:
I think it's it's maybe not an unexpected lesson. It it's probably just one we have to keep relearning continually both as individuals and as an industry, and we're always drawn to the shiny new thing, and those can be really exciting. But often, the the cycle of experimentation fiddling with it and then ultimately getting to adoption isn't a net positive. I I've I've often joked that, you you can find a developer who's procrastinating because they're changing their color scheme today. And I think fiddling with dev tools is a common way of of kind of procrastinating on other other challenges.
And I think there's a lot of a lot of opportunity, in cases in which that has happened with some of the the AI tools. And so I think that's a lesson I continue to, to relearn that some of those things may not be net productive today, tomorrow, or next week, and figuring out how to balance that, I think, is is still a challenge.
[00:46:41] Tobias Macey:
To make it a pithy remark, AI is just a bigger yak to shave.
[00:46:46] Tanner Burson:
Sure. Yeah. I I'm I'm I'm good there. Yes. There are always more yaks that need shaving, and AI just may make it faster to find new yaks to shave.
[00:46:55] Tobias Macey:
Or or it lets us build a more elaborate bike shed. Yes. Absolutely.
[00:47:00] Tanner Burson:
Or invent new colors that we could paint it even. Sure. Ones we've never thought of.
[00:47:04] Tobias Macey:
And I I think we've already touched on this a bit, but is there anything more that you have to say about when AI is
[00:47:13] Tanner Burson:
the wrong choice for a developer? I think probably the only additional thing I I I would add is, you know, we've talked a lot about code review. I think one of the things that that leads to is you have to understand the thing that's being built to be able to review it. And so I think one of the areas where AI dev tools may not be the right choice is if you don't understand the underlying problem or the tools that are being used. It makes it incredibly hard to evaluate how, how well the the AI generated code will behave, how it will impact it. There's a a quote. I think it I I will mix up exactly which one it was. I think it was in the original K&R C programming book. So I don't remember whether it was Kernighan or Ritchie, who had said that we all agree that debugging is inherently harder than writing code.
And so if we write code as clever as we possibly can, then by definition, we are not clever enough to debug it. And I think a lot of the AI generated code fits the same way. If I'm not capable of writing the code that the AI generated, I'm not capable of of evaluating it and debugging and maintaining it later. And so I think if that is where you've you you are at with the code that AI is generating, that is probably not the right set of things to be focused on.
[00:48:28] Tobias Macey:
Absolutely. And as you continue to invest and keep an eye on this space and the forward trajectory that we're on, what are some of the projections that you have for the near to medium term impact on the developer experience as a result of generative AI?
[00:48:47] Tanner Burson:
To to to stick with our pithy comments, I expect every dev tool will cram AI in in some way. It's we're we're not going to see fewer AI dev tools anytime soon. I think every every tool, every ecosystem is going to try and find some way to get it in there. I think if you bundle up a lot of what that means for for most developers and and look at a lot of the conversation we've had today, I think it's going to be increasingly important that developers understand what's important in their workflow, what's important in the problems that they're solving, and where AI can be valuable in it because they're not going to have less opportunity to fit AI into their workflow. It's going to expand, and people are going to have to be increasingly discerning about the the right places and the right times to use that.
[00:49:36] Tobias Macey:
Are there any other aspects of the use of generative AI in development, the impact that it's having on developers and the ecosystem, or any of your experience in that space that we didn't discuss yet that you'd like to cover before we close out the show?
[00:49:51] Tanner Burson:
I think I think we've hit on most of it. I think well, I you opened with talking about AI maximalists and skeptics, and I I am certainly typically more on the the skeptical end of, of the spectrum certainly with with the state of the the technology today. I I am still optimistic that there are useful things in here. I think there are real challenges to continue to figure out in cost. I expect cost for these tools will rise over time in the midterm. For most users, they're being heavily subsidized today by the third party vendors or by Facebook open sourcing large portions of it. By all accounts, the amount of compute required to do what they're doing isn't decreasing. It's increasing, and that will inherently lead to rising cost. And so I think figuring out how to balance cost to benefit, in real terms of cost, in real dollars for a lot of these is gonna be increasingly challenging and important, and likely results in a few of these tools, kind of fading away due to just pure economics in the next few years.
[00:50:59] Tobias Macey:
Alright. Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And as the final question, I'd like to get your perspective on what you see as being the biggest gaps in tooling, technology, or human training for AI systems today.
[00:51:17] Tanner Burson:
I think there's there's lots. The human training is a particularly interesting one. We talked about how fast these things are evolving, how much of a black box that they are, particularly in the 3rd party API side. And and I think that's a real challenge for for people who have other things to focus on to try and stay, stay up to speed on that. I think figuring out how to to improve people's baseline knowledge of these is going to be important, but I think we're probably still a few years before most engineers have a better understanding of the fundamentals of these tools and and how they work.
[00:51:53] Tobias Macey:
Well, thank you very much for taking the time today to join me and share your thoughts and experience of trying to navigate this landscape and help your teams do the same. It's definitely a very interesting problem area, one that we're all trying to figure our way through right now. So definitely, appreciate you sharing your thoughts on it, and I hope you enjoy the rest of your day. Yeah. You as well. It's great talking with you today too.
[00:52:23] Tobias Macey:
Thank you for listening. And don't forget to check out our other shows: the Data Engineering Podcast, which covers the latest in modern data management, and Podcast.__init__, which covers the Python language, its community, and the innovative ways it is being used. You can visit the site at themachinelearningpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts at themachinelearningpodcast.com with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Hello, and welcome to the AI Engineering podcast, your guide to the fast moving world of building scalable and maintainable AI systems. Your host is Tobias Macy, and today I'm interviewing Tanner Berson about the impact of generative AI on software developers. So, Tanner, can you start by introducing yourself?
[00:00:28] Tanner Burson:
Yeah. I am VP of engineering at Prismatic. We're a B2B SaaS company focused on productized integrations. We have a product and platform that allows our customers to build integrations into their product, making everything from onboarding to data manipulation easier for their customers.
[00:00:47] Tobias Macey:
And do you remember how you first got started working in the ML and AI space?
[00:00:54] Tanner Burson:
Yeah. I wouldn't say I am primarily in the machine learning or AI space. I've been an engineering leader of varying sorts for more than a decade, and I've often found myself managing or leading initiatives that involve data engineering practices, ranging from the classic "we have a pile of data and need to distill it down to something smaller and usable" to more complex predictive modeling of sales or inventory data. I've also done some work leading speculative initiatives on the strategic value of AI within our products and within the developer base.

Tobias Macey:
Digging now into the topic at hand, can you start by describing the types of roles and work that you consider encompassed by the term "developers" so that we can scope this conversation?

Tanner Burson:
Yeah. It's a great starting point. I think it's always been a blurry line, and that line has only gotten blurrier over time. Things like this discussion of AI tooling and developers make the line even harder to discern. But, typically, when I'm talking about developers, I'm largely talking about people whose primary job is writing code to create software, which seems either broad or narrow depending on your viewpoint, but that's the starting point I usually work from.
[00:02:19] Tobias Macey:
Before we get too deep into the topic, you mentioned that you've been an engineering leader for a long time, and you're currently the VP of engineering at Prismatic. Given your role there and the fact that it is a company focused on enabling other engineers and teams to manage their software flows, what type of visibility does that give you into the effects that AI is having on developers and their work, and what insights has that illuminated in the work that you're doing?
[00:02:54] Tanner Burson:
Yeah. I think it's certainly one of the things that is very interesting about this product and company at this particular time and place. As a starting point, we are a software company, so first and foremost we build software. How this works with our teams and how we deal with these tools day to day is certainly something we have to navigate. But as you noted, we work a lot with other developers and product leaders at other companies, and we see the things that they are building both for themselves and for their customers. That gives us an interesting view into where they're spending their time, where their focus is, and where that's going.
Insights at this point are hard to gauge. We're still seeing a lot of activity around AI integrations and how to get the right kinds of data into third-party AI platforms. But I wouldn't say there's yet been a concrete trend of "this is exactly what everybody is doing" other than everyone doing a lot of experimentation. I don't think we've seen anybody come in with a really strong viewpoint that they have exactly the right answer and exactly the right process yet. A lot of our customers are experimenting with AI and how it fits into their products and workflows, so we see a lot of different AI vendors being used for integrations and a lot of different kinds of data and data flows going through there. But there hasn't yet been a pattern other than the pattern of lots of experimentation.
[00:04:12] Tobias Macey:
One of the overall narratives that has been going back and forth over the past year, maybe the past two to three depending on how you want to frame it, is the tension between the AI maximalists and the AI skeptics. You have famous announcements from, I think it was the CTO at AWS, saying that AI is going to put all the engineers out of a job, that we don't need any more engineers and we're just going to let AI do all the work. And then you have some very prominent and senior engineers saying, "I've tried to use AI to do my work, and it's garbage, and I have to correct it, and it actually takes me more time to use it than if I just did it myself." I'm interested in, at the 30,000-foot view, what you see as the long-term impact on the job prospects for software developers and some of the ways you see them being impacted at the career level by these generative AI capabilities.
[00:05:09] Tanner Burson:
Yeah. I think this is probably the more interesting angle to take on the AI maximalist versus AI skeptic debate. The AWS view is an incredibly self-serving one, an attempt to go sell more AI product. It's interesting that just in the last couple of weeks, a relatively high-up engineering leader at Google, I don't remember who it was, said that with their in-house, custom-trained AI tooling for coding, they had reached 25% of their code being developed via AI. My experience tends to be that if you're hitting 25%, you've hit the least interesting, boilerplate level of your software. So that's an interesting data point: Google isn't saying this has taken over large portions of what they're doing. When you start to ask what this means for people's job prospects going forward and where this really leads, I start with a simple premise: I don't believe there will be less software tomorrow than there is today. We are only going to create more software, and I think we will still need more software engineers to create and manage all of the software that's being created. So I don't expect the AWS view that all software engineers will be replaced by AI. Maybe the line of growth changes, and we've seen that in the past. If you were to look at the growth rate of hiring in software over time, it's certainly not just up and to the right forever. I'm sure we will see some changes there, but I don't expect a massive retraction. I do think today is a really hard day to be a new software engineering or computer science graduate out looking for a job, and there's going to be a lot of soul-searching and struggling at a lot of companies to figure out what the right balance of more junior engineering talent in their organizations will be. That said, I think it's incredibly important that we continue to hire and build the next generation of software engineers. If you run from the AWS premise, then at some point we just run out of software engineers entirely, because they will all eventually age out of the career and the workforce. So we need to continue to build the next generation of software engineers and figure out how to make them as effective as today's engineers are.
[00:07:24] Tobias Macey:
There is an interesting semantic debate to be had as well between the ways that we say software engineer versus coder and the ways we think about the impact across that divide. To some approximation, if you say "I am a coder," then maybe that means you're more of a junior-level person: you just write a bunch of for loops and do exactly what you're told. Versus a software engineer, where you're maybe thinking more at the systems level, trying to translate the product requirements and requests into the technical architecture and how that is actually going to be implemented in code. If you want to use that kind of Boolean approximation, then maybe software engineering becomes more valuable, because the AIs are not to the point where they can actually translate user requirements into a systems architecture and understand at a broad-reaching level what that means across the various system modules.
Whereas if it's just "I want to write a for loop," that is very much where the AIs are capable. They are gaining a little more broad-reaching, multi-file mutation capability, but we're not really there yet. And maybe at some point we'll get to where we can say, "I want an application that does x, y, and z and is able to scale from zero to infinity," and it will do all of your systems design, DevOps, and so on, but we're a ways away from that. To your point about the prospects of junior engineers, there was an interesting conversation I had with one of the founders of Tabnine whose impression was that it maybe actually enables junior engineers a bit more and allows you to scale your software engineers better, because the AI becomes the pair programmer and encapsulates more of the knowledge of a senior engineer, so that the senior engineer is able to support a larger number of more junior engineers in their work.
[00:09:27] Tanner Burson:
Yeah, that's an interesting view I haven't heard. I think if I were a junior engineer today, that's certainly an angle I would argue for: that I can be more productive and more capable than I could have been without these tools. When you talk to more senior engineers, they often describe some of the current generation of AI dev tooling as like pair programming with a junior engineer. You have to assume that it doesn't have all of the context all of the time, that it may be missing pieces that are important, and that it may be misremembering key details or inventing things that don't exist. So it's interesting that whichever side of that fence you sit on, you view the tool as providing the other side of the value: junior engineers believe it helps them behave more senior, but senior engineers see it as more of a substitute for a junior engineer.
And I think there's probably pieces of both of those stories that are very true today.
[00:10:21] Tobias Macey:
Digging now into some of the actual workflow impact and the ways that AI is affecting the day-to-day of engineers as they conduct their work in the present: there are a lot of different utilities, maybe the most broadly known being GitHub's Copilot and various approximations of it. I'm wondering what you see as the general categories of tools that are most impactful, both positive and negative, and that are most useful in the development cycle.
[00:10:55] Tanner Burson:
I think it's interesting because I don't think we have the tools that will end up defining this category yet. The current batch of copilot tools, and everybody seems to have branded something as a copilot, are fine today. They serve a purpose, but they're not particularly exciting or awe-inspiring. Where it will get interesting is probably years from now, as we're able to get more of your product data into the AI system and closer to real time; those tools are going to get really interesting. Today, as far as the tools that most people are adopting, it's copilot tools, it's code generators, it's things of that nature.
And I honestly haven't seen a huge positive impact from most of those tools. The ones I have seen people adopting help with the easy work. To go back to the Google 25% example: if you need to make a Go type or class that will map some JSON data, I'm sure it's going to generate that way faster than you could type those characters out, but that's not particularly exciting or challenging work. And in a lot of cases, I have seen it consume more of people's time and energy than it has saved. I think code reviews become even more important in a world where a team is more reliant on AI-generated code.
You can no longer assume that the person writing the code has context beyond the lines of code that are presented, because many of the AI tools aren't able to gather enough context to look at the scope and scale of things. Performance issues and scale issues are really hard for these tools to understand and reason about inside a more mature code base today. So code review has to really step up and make sure that the problem is being solved not just at a line-by-line level, but as it fits into the broader system.
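To make the "boilerplate" point concrete, here is a minimal sketch (in Python rather than Go, purely for illustration; the field names are hypothetical) of the kind of JSON-mapping type an assistant can generate almost instantly, and which is rarely the hard part of the work:

# Minimal sketch: a JSON-mapping record type of the sort a code assistant
# can emit quickly. Field names here are hypothetical examples.
import json
from dataclasses import dataclass

@dataclass
class OrderEvent:
    order_id: str
    customer_id: str
    total_cents: int
    currency: str = "USD"

    @classmethod
    def from_json(cls, payload: str) -> "OrderEvent":
        data = json.loads(payload)
        return cls(
            order_id=data["order_id"],
            customer_id=data["customer_id"],
            total_cents=int(data["total_cents"]),
            currency=data.get("currency", "USD"),
        )

event = OrderEvent.from_json('{"order_id": "o-1", "customer_id": "c-9", "total_cents": 4200}')

The generation is fast, but as noted above, the review burden of deciding whether such code fits the surrounding system remains with a person.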
[00:13:10] Tobias Macey:
The other interesting application that is starting to become a little more available is the use of AI for an initial pass on code reviews or pull request reviews. I think GitHub recently released something along those lines, and Tabnine recently released something to that effect. I haven't gotten a lot of personal experience with that yet, but it's another one of those things that seems like it could cut both ways. Maybe it saves time because it gives a good first pass of "oh, I didn't think about that, maybe I need to adjust this here." But it could also just end up being a waste of time if it doesn't have enough understanding of the reasoning behind certain design choices in the code, and maybe you're just going to be fighting against it because it says "you shouldn't do this, you should do it this way," and you say "well, I know why I did it this way, so just leave me alone."
[00:14:08] Tanner Burson:
Yeah, absolutely agree. I would probably be more excited about tools like that if I were on a very small team, two or three developers working on a brand-new product, maybe something like a mobile app where the scope of interaction is more knowable and more contained, and just having the sense that there is something else helping review and push us forward. I think there's opportunity in places like that for tools like that to be more valuable, but I tend to agree that on a larger team and in a larger product, they're as likely today to add noise and additional cycles as they are to really streamline your workflow in any meaningful way.
[00:14:54] Tobias Macey:
Another interesting tool that I have seen, but again not experimented with extensively, is something called Plandex. What it offers is a way for you to add multiple files in a repo into its context, describe a plan of what you want to do across those files, and then have it generate that plan and give you a way to refine it. That's useful for broad-reaching refactorings: "I want to move this module from this location to that location and make sure all of the references to it are updated appropriately," or "I want to rename this class or change its signature," where maybe I need to refactor across both the core library and all of the applications consuming it. But, again, there is that cycle of not knowing how accurate it's going to be, so there has to be a lot of human supervision. Maybe it speeds things up, but I think it takes a while to reach that level of comfort and understanding of how the tool operates to feel secure in allowing it to make those changes.
[00:16:07] Tanner Burson:
That's one of the things I find most challenging with a lot of the AI tools today. That tool sounds very interesting, and it's not one I'm familiar with. I could actually imagine a lot of use cases for things like Terraform, where there's just a lot of manual manipulation of text as you make relatively small changes across things. But your point about workflow and being comfortable with it is a really key thing that I don't think has been nailed yet in a lot of the AI dev tools. Dev tools are such a personal choice for everyone, to the point that we have the infamous editor wars of Vim versus Emacs that have been going for 40 years, where people are very attached to the tools and the choices they've made, what tools they've selected, how they're configured, and how they work.
People get very attached to the workflows and behaviors that those tools allow them to create, and I don't think the AI tools have reached that level for most people yet. There is still a bit of discomfort and a bit of distance between people and a lot of these tools. There are some folks who have spent enough time and effort to feel like they've fully adopted them and understand how to fit them in, but most of the engineers I've met and talked with are still evaluating how they fit into their preferred and ideal workflows.
[00:17:38] Tobias Macey:
And that point of comfort and understanding, I think, is really a big piece of why people get very attached to their specific tool chains and workflows because they really grow to understand it at a deep level. And LLMs, to a large extent, are still generally a black box of you put something in, you get something out. You maybe have a vague understanding of what the mapping between input and output is, but you're not guaranteed to get the same output every time. And I'm wondering what you see as the necessity of building understanding of LLM internals, how they're built, how they operate, and how that maps to the level of comfort in letting the tool have more free rein in your workflow and do larger and larger pieces of work for you.
[00:18:32] Tanner Burson:
Yeah. I think most engineers want to understand their tools. So a lot of folks are interested in figuring this out, or they're uninterested in figuring it out and are less interested in the tool because of it. There is a ton of opportunity for folks to understand these tools better and to be more comfortable with where their strengths and weaknesses are. The unfortunate part is that the level of complexity and depth here is something most folks struggle to want to get their head around just for a dev tool.
On the complexity scale, it's like learning how core operating system fundamentals and core processor fundamentals work, things at the very lowest levels of the stack. And that's a lot to want to take on just to refactor your class better, to generate better boilerplate, to adapt what you're doing a little better. So I don't think a lot of engineers, and I would put myself in that camp, have spent as much time as they could going deep on it. There's another challenge there, though, which is that most people aren't using an LLM directly. They're using the OpenAI API, or they're using GitHub's wrapper around it, or they're using Amazon Q. I assume somebody actually uses it.
And that's even more of a black box than an LLM is on its own. You don't fully know what system prompts are going into those, just from a prompt perspective. You don't know what additional adjustments they're making to the input or the output that's coming through. You don't know how many times it's rejecting output before selecting one that it decides is sufficient given some other set of constraints. So there's a ton of hidden, important detail, and engineers hate not knowing what's going on behind the curtain for something they think is very important. It's unlikely we're going to have a lot of visibility into that API layer anytime soon.
[00:20:44] Tobias Macey:
I think that point on the OpenAI API is also an interesting thing to bring up. From the bits of exploration and tinkering that I've done personally, it seems that a substantial portion of the ecosystem of developer tooling with some sort of AI as the driving force was at least initiated around that monoculture of presuming that the OpenAI API was the thing you were interacting with. And there has been a substantial amount of effort in the development and release of more open models and more open ways to execute those models, things like Ollama, the Hugging Face Transformers library, et cetera.
But most of those tools are still presuming that the OpenAI API is the thing you're talking to, and so all of those other providers have implemented that API contract at some layer as a facade. I do think that still adds an extra bit of friction: maybe I don't want to use OpenAI, I want to use the model that's running on my laptop, but now I have to figure out what assumptions this tool has made about the OpenAI API and how it's interacting with it in order to translate that to the facade and get it to work happily. I'm curious what you have seen from your own experience of working with your team and your customers about how they're navigating that friction.
[00:22:20] Tanner Burson:
I think they're navigating it as best they can, which is the way everybody is doing it. You're right that there is not a standard for interfacing with a lot of these models and engines at this point, but there has become a bit of a de facto standard around the OpenAI API interface. On one level that gives you some standard to aim at, but I agree that it is probably, long term, limiting the other capabilities and choices that folks have. We've certainly seen customers implementing against OpenAI directly, and we've seen a fair number of other APIs mimicking that as best they can, including in the local tooling space, where people are trying to create proxies and wrappers around those to make them behave the same.
But, yeah, it'll be interesting to see in the next few years how that API surface changes and whether any of the others, either local tools like the Ollama set of tools or third parties like Anthropic or Google, manage to build an interface that is more compelling and interesting and pushes that forward.
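As a concrete illustration of that facade pattern, here is a minimal sketch, assuming a local Ollama server exposing its OpenAI-compatible endpoint on its default port and a model already pulled locally; the model name is just an example:

# Sketch: pointing the standard OpenAI Python client at a local, OpenAI-compatible
# endpoint (here, Ollama's default port). Assumes `ollama serve` is running and a
# model such as "llama3.2" has already been pulled; adjust names as needed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local facade instead of api.openai.com
    api_key="ollama",                      # required by the client, ignored by Ollama
)

resp = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Explain this regex: ^[a-z0-9_-]{3,16}$"}],
)
print(resp.choices[0].message.content)

The appeal of the de facto standard is exactly this: one client library, many backends, at the cost of inheriting whatever assumptions the tool baked in about OpenAI's behavior.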
[00:23:36] Tobias Macey:
Another piece of this is that there are a lot of off-the-shelf tools focused on different areas of development, and they make different promises about how they're going to influence and accelerate the way that you work. There are also all of the component parts for putting together your own tools, in the form of things like LangChain, LlamaIndex, and Haystack, where you can build your own AI stack from whole cloth and make it work the way that you want it to, which, again, requires a lot of exploration and experimentation. But I imagine that once an engineer has gone through the effort of building something that suits their workflow and does exactly what they want in the way that they want it, then maybe it will actually accelerate their work more than if they were trying to use one of these off-the-shelf utilities.
And I'm wondering if you have seen any movement, in your experience of working with your team and your customers, of people going down that path of investing in tool development where an AI is a component of the workflow, but they are making it more bespoke than what is being produced as a generally available and generally applicable toolset?
[00:24:54] Tanner Burson:
It's a really interesting question. None of the teams I have directly worked with have invested the time in building their own AI-based dev tooling. I've seen some small experimentation in that direction, but it didn't result in something that was ultimately usable. You also hit on something that I think is interesting right now in particular, which is an individual developer doing this to improve their own workflow. That's probably happening in places we're just not seeing, because the choices individuals make about their tooling are constant, and it's a constantly shifting landscape.
But that's a large investment for one individual, and I'll be interested to see whether it repeats: does this get to a place where an individual developer is able to easily build these things for themselves? We've seen teams attempt to do this, but those tend to be larger companies that have existing internal developer experience teams or internal platform teams who can make this part of their internal productized platform. I haven't seen that as much at our size, or even at a lot of our customers' sizes, as a major focus yet. Most of those AI initiatives are more focused on their product, so if they're building out internal AI infrastructure, it's about how that can serve their product, not necessarily their internal developer tooling.
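For a sense of what that kind of bespoke, personal tooling can look like, here is a minimal sketch, assuming recent versions of the langchain-openai and langchain-core packages and an OPENAI_API_KEY in the environment; the prompt wording and model choice are illustrative only, not anything discussed in the conversation:

# Sketch: a tiny bespoke dev helper built from composable pieces rather than an
# off-the-shelf product. Assumes `pip install langchain-openai langchain-core`
# and an OPENAI_API_KEY in the environment; swap in a local model if preferred.
import sys

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a terse code reviewer. Flag risky changes; ignore style nits."),
    ("human", "Review this diff and list concerns as bullets:\n\n{diff}"),
])

# Compose prompt -> model -> plain-string output into a single runnable chain.
review_chain = prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()

if __name__ == "__main__":
    print(review_chain.invoke({"diff": sys.stdin.read()}))

A developer might wire something like this into their own workflow with, say, `git diff | python review.py`, which is exactly the kind of individual, hard-to-observe investment described above.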
[00:26:27] Tobias Macey:
I think too that one of the reasons that hasn't become as widespread is because we're still very early in this process. All of the tooling and libraries are still very much in flux. There are constant shifts in the capabilities of the models themselves, and I'm interested to see how the landscape develops in maturing some of those building blocks so that they're a little easier to snap together like Legos, rather than just "here's a chunk of wood, here's a lathe, good luck."
[00:27:02] Tanner Burson:
Yeah. And at this point, it doesn't even feel like we're being handed lathes necessarily. You get a chunk of wood and a sharp piece of metal and are told to figure out how to spin it up to speed so that you can get it whittled down. So there's obviously tons of opportunity to continue to build, to your point, the Legos, the building blocks that will allow this to be a more composable set of tools than it is today; they're relatively monolithic tools right now. One of the other challenges will also just be training. The training process, the amount of data, the time, the compute, all of those dimensions are still very expensive, even for something as small as a code base.
So I think we're going to need to continue to find better ways to manage those processes and tools to get to a point where you can have something more individualized than what the prepackaged third-party solutions look like right now.
[00:28:05] Tobias Macey:
Yeah. Another interesting aspect of this is that if you want to do anything fairly sophisticated with these AI models, you need a pretty decent set of hardware to execute it on, even if you're just doing it on your local laptop. You typically still need some sort of GPU with a decent amount of VRAM available, and not everybody has that. There has been movement toward building smaller models. Llama 3.2, I know, has a fairly small variant designed to run more on CPU hardware, and I think the Qwen 2.5 Coder model just launched.
I just read about that this morning as we're recording this, and that one has a 0.5-billion-parameter variant, which is fairly small as far as these things go. So I do think that some of these smaller models, with better tooling around context creation, will maybe improve the ability for individuals to do this on their own. But, yeah, the hardware requirement is definitely still a pretty significant constraint.
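To ground the local, CPU-friendly angle, here is a minimal sketch, assuming an Ollama server running locally with a small model already pulled (the model tag shown is just an example), using Ollama's native HTTP API rather than the OpenAI-style facade discussed earlier:

# Sketch: calling a small local model through Ollama's native /api/generate
# endpoint. Assumes `ollama serve` is running and something like
# `ollama pull qwen2.5-coder:0.5b` has already been done; tags are examples.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder:0.5b",
        "prompt": "Write a Python function that deduplicates a list while preserving order.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])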
[00:29:12] Tanner Burson:
It'll be interesting to see how the smaller models behave in the use cases that people care about. The impression thus far has been that the bigger the model and the more tokens it can accept, the better the outcome will be for software use cases. It'll be interesting to see, as some of these lighter models come together, whether they're able to be tuned in ways that are just as valuable on the software side even given the smaller scope.
[00:29:47] Tobias Macey:
Another interesting and useful angle on this overall problem: individual developers have to make their own decisions about what is useful, whether a tool is actually improving their productivity or wasting their time. But as an engineering leader, it is equally important to understand the landscape and the impacts, both positive and negative. I'm wondering how you are evaluating and offering guidance to your team on the ways they are incorporating and using these AI-powered tools.
[00:30:27] Tanner Burson:
It's a really interesting challenge at this point, because a given team will have the full breadth of AI enthusiasm within it. At one end you'll have the engineer who's built a machine at home so they can train, build, and manage their own models, and they're super deep in it. At the other end, you have folks who just want to work with their two-color syntax-highlighting theme, and that's it; they're not interested in going beyond that. So there's certainly a balancing act in figuring out how to make sure that everybody, regardless of their enthusiasm, is able to be productive.
In my case, the baseline is that we enable the Copilot-style tools, whether through GitHub or other services that we subscribe to, and let our engineers use them in whatever ways they see fit. Some enable them for everything, some have them turned off, and some have them in a kind of chat-only mode where they can poke it when they need something but it otherwise stays out of their way. But just like any other tool we look at adopting across the team, we have to talk about it as a team, talk about what's working and what's not, and decide whether there are alternatives we want to investigate. We have security and compliance policies governing what kinds of tools we can adopt.
That absolutely applies here, so we have to be conscious of what the requirements and capabilities of those tools are and how they fit into our broader compliance posture. And lastly, these tools are not free, so we've got to balance the net gain in productivity against the actual cost of the tools, and that's something we have to keep paying attention to. On a personal level, from the leadership side, I read about a ton of this stuff and talk to a lot of my peers in other organizations about what they're seeing, what their teams have found effective and what's not, and I use that as part of the discussions we have internally. But like anything else, there's no one true answer. It's a constant back and forth and constant evaluation of whether there are new ways we can be more effective.
[00:32:54] Tobias Macey:
Absolutely. And on that note of compliance, there is also the subtext of risk, which can take the form of corporate secrets and private code getting leaked, depending on the toolchain you're using and the ways it manages the collection and storage of the data submitted to it. And then there is also risk in the sense that you are relying on and trusting this AI to give you code that is going to go into production and serve your customers, and maybe it introduces security vulnerabilities or other types of bugs, so it actually ends up being a net negative to your overall team productivity.
And I'm curious how you're thinking about that management and mitigation of risk in the use of these AI tools.
[00:33:46] Tanner Burson:
Yeah, you nailed the two risks that are probably highest on my mind. One is just the security of the service, since, as we mentioned, it's hard for a lot of individuals and teams to build these things in house and have fully in-house-trained infrastructure. You're relying on a third party who will have access to some amount of your code, potentially data, and in some cases it may involve running software they provide, whether it's just a CLI wrapper or something similar, on developer laptops, which are relatively high-security environments in terms of what they're capable of and what they can see. So that's certainly at the top of my list of risks. Much like any other vendor we would adopt, we have to go through a security and compliance review, make sure that their policies and standards meet ours, and then give it the eye test: do we believe this is a vendor that is taking us seriously and that we would want to partner with for these sorts of use cases?
That's far and away the first and foremost thing we have to pay attention to. The harder one, though, is the code quality risk you were describing: the introduction of subtle bugs or challenges. Hitting back on something we talked about earlier, I think this is where code review has to play a huge role and has to evolve. Teams have to look at code differently and pay attention to it differently. For teams that are adopting some of the AI tooling more completely than I am, and many are, those code reviews may need to happen earlier in the process.
I certainly treat code generated by an AI tool as code I must immediately review and believe in before it goes into even a branch or a code review for another engineer to see. So we have to pay attention to that at a much deeper level. With my team, we have mandatory code reviews for everything, and most teams I've worked with in the past have had similar practices. With the addition of more third-party code added via AI or other means, those code reviews become even more important in ensuring that the team is trained, prepared, and understands how to really dig into concerns like security, performance, and scale, and can identify those issues and resolve them before they're pushed live.
[00:36:19] Tobias Macey:
Another interesting aspect of the code review problem is that you are able to generate a lot more code faster, and whether or not it's good is a subjective question. Because you can say "write this whole class, then implement usage of that class, then refactor this thing over here," your pull request has the potential to explode by maybe an order of magnitude in terms of the number of changes included in one change set, which increases the burden on the human doing the review because there is a lot more code for them to look through. So I can see that being a net negative to team productivity: while you're producing more code, and maybe that is a benefit to productivity, it means the person doing the review has to take much longer to look through it all, especially if they know or suspect that a substantial portion of it was generated by an LLM.
[00:37:20] Tanner Burson:
Yeah. There's certainly potential for very negative feedback loops in this sort of process, and it's something to pay attention to. It also highlights part of why I am not convinced by the AWS argument that software engineers will be replaced by AI. The ability to understand this, to construct change sets in a way that is meaningful, manageable, and representative of what's going on is currently, at least based on the current set of tools, a human-only set of interactions.
I think there's still a long, long road before the code-generating AI is able to incorporate that sort of concern and that sort of attention to the code it's creating and the process of getting that code created.
[00:38:16] Tobias Macey:
I think another interesting aspect of risk is the personal risk to engineers who start to use the AI as a crutch: "I know this is directionally what I want, go ahead and generate it for me, and maybe I'll go through a few cycles of refining it, but I'm largely depending on the AI to do the hard work of writing the code, generating the algorithms, understanding the context." Maybe that leads to a long-term decline in your own ability to understand and create those logical structures. I've seen some studies along those lines, more in the creative realm, where using LLMs to generate creative content subsequently leads to a measurable decline in creativity on follow-on tasks. Obviously, those are very small-scale studies, very limited in scope, but it is another aspect of this problem space that is interesting to keep in mind as we continue to push forward on the capabilities of these tools.
[00:39:25] Tanner Burson:
Yeah, I think that makes a ton of sense, and I get it at a simple level: almost anything I've ever gotten better at in my life required me to do more of it, not less of it. So if an individual's goal is to become a better software developer, a better software engineer, a more productive part of their team, and the rest of it, the idea that you will get there solely by delegating more of the work to an AI tool seems counterproductive in the long term. There's another angle, too: the potential reputation hit of not paying enough attention to what it's generating and continuing to push out mediocre AI-generated code that creates those longer review cycles we were just talking about and requires more time and more feedback loops with the reviewer.
Long term, that feels like a negative spot to be in as an engineer. If you're not improving the quality of what you're contributing to the team, you can end up losing some of your credibility or respect within the team.
[00:40:37] Tobias Macey:
Absolutely. Another interesting takeaway, I believe from that same conversation I had with the founder of Tabnine, was an offhanded remark I made that with the growth of AI tools for generating code, humans become the intermediate representation of software.
[00:40:57] Tanner Burson:
Yeah, it's interesting. I was having a conversation with an engineer the other day, and he was describing how software to him has always been similar to writing. He didn't come from a computer science background; he came from another set of industries. And the parts of his experience and brain that got excited about software were the same ones that got excited when he was writing. It was a reminder to me of something I've always believed, but maybe hadn't been able to put as succinctly as it came out of that conversation: we write code for people, not for computers.
I know it seems counterintuitive, because the code runs on computers and it does what it does. But if we were only writing code for computers, we would all be writing assembly, because, ultimately, that's much closer to what the computer cares about; it's closer to what's actually going to be executed. We could get down to microcode, or wherever we wanted to bottom out, but we don't predominantly write in the lowest level of tooling our computers can operate in. We work at many, many higher levels of abstraction, and we do that because it's easier for people, not because it's easier for computers.
And that's important, because the code we write is primarily read by other people, not by the computer. The computer reads something that was compiled and run through multiple other layers of abstraction; the code we write is primarily read by other people. I think that's where some of the conversation around AI code gets lost. If it were purely about efficiency, you would just automate your compiler, make it emit more code faster, or come up with even higher-level abstracted languages. I'm sure some of that will happen with some of the AI tools. But most of the code today is written for people, and it's important to remember that, including as we talk about code reviews, because people are the more common consumer of our code.
[00:42:53] Tobias Macey:
Yeah. Ultimately, nobody cares about software. Nobody's paying for software. They're paying for a solution to a problem, and it's just that software happens to be a means of, producing that solution.
[00:43:05] Tanner Burson:
Absolutely. And I think that problem-solving aspect is ultimately why we do what we do. If we weren't solving real problems with it, we probably wouldn't have careers doing this. Some of us might do it for fun, but most of this is really about solving real problems, and the code that turns into software happens to be the way we do a lot of it. Losing sight of that is certainly a bad outcome if that's where some of this leads.
[00:43:34] Tobias Macey:
And in your experience of working in this space, guiding your teams, educating yourself on the ecosystem, what are some of the most interesting or innovative or unexpected ways that you've seen AI used in the development process?
[00:43:48] Tanner Burson:
Some of the things I have started to see that are really interesting involve less the generation of more code and more the feedback cycle after the release of the code and how that can fold back into it. There are a couple of companies, whose names escape me right this second, working on things like taking an intermediate representation of your system and mapping it back to your observability data: being able to look for performance regressions, bugs, errors, and exceptions in your observability data and map those back to a runtime understanding of the system and ultimately back to where in the code those things are represented. Not the simplistic "an exception was raised on line x" sort of thing, but looking at more complex regressions and challenges and trying to map those back through the system to the source.
I think some of those are incredibly interesting and, long term, have huge benefit for how we're able to evolve software and improve it over time. A lot of the focus today is on the zero-to-one phase of software, where it's easy to see the gains, and to see them very quickly, but that's ultimately a very small part of the software development life cycle. Writing the code is often the shortest of the processes involving code. As these tools expand and we start looking beyond that, that's the stuff I'm more interested in seeing where it goes.
[00:45:26] Tobias Macey:
And in your own personal experience of navigating this ecosystem, trying to stay apprised of the capabilities and the pitfalls, what are the most interesting or unexpected or challenging lessons that you've learned in the process?
[00:45:42] Tanner Burson:
It's maybe not an unexpected lesson; it's probably just one we have to keep relearning, both as individuals and as an industry. We're always drawn to the shiny new thing, and those can be really exciting, but often the cycle of experimentation, fiddling with it, and then ultimately getting to adoption isn't a net positive. I've often joked that on any given day you can find a developer who's procrastinating by changing their color scheme, and fiddling with dev tools is a common way of procrastinating on other challenges.
I think that has happened in a lot of cases with some of the AI tools. So that's a lesson I continue to relearn: some of those things may not be net productive today, tomorrow, or next week, and figuring out how to balance that is still a challenge.
[00:46:41] Tobias Macey:
To make it a pithy remark, AI is just a bigger yak to shave.
[00:46:46] Tanner Burson:
Sure, yeah, I'm good with that. There are always more yaks that need shaving, and AI may just make it faster to find new yaks to shave.
[00:46:55] Tobias Macey:
Or it lets us build a more elaborate bike shed. Yes. Absolutely.
[00:47:00] Tanner Burson:
Or invent new colors that we could paint it even. Sure. Ones we've never thought of.
[00:47:04] Tobias Macey:
And I think we've already touched on this a bit, but is there anything more that you have to say about when AI is the wrong choice for a developer?
[00:47:13] Tanner Burson:
I think probably the only additional thing I would add is that we've talked a lot about code review, and one of the things that leads to is that you have to understand the thing being built to be able to review it. So one of the areas where AI dev tools may not be the right choice is if you don't understand the underlying problem or the tools being used, because that makes it incredibly hard to evaluate how well the AI-generated code will behave and what its impact will be. There's a quote, and I may mix up exactly where it's from, I think it was in the original K&R C programming book, so I don't remember whether it was Kernighan or Ritchie, to the effect that we all agree debugging is inherently harder than writing code.
So if we write code as cleverly as we possibly can, then by definition we are not clever enough to debug it. I think a lot of AI-generated code fits the same way: if I'm not capable of writing the code that the AI generated, I'm not capable of evaluating it, debugging it, and maintaining it later. And if that is where you are with the code that AI is generating, that is probably not the right set of things to be focused on.
[00:48:28] Tobias Macey:
Absolutely. And as you continue to invest and keep an eye on this space and the forward trajectory that we're on, what are some of the projections that you have for the near to medium term impact on the developer experience as a result of generative AI?
[00:48:47] Tanner Burson:
To stick with our pithy comments, I expect every dev tool will cram AI in somewhere. We're not going to see fewer AI dev tools anytime soon; every tool, every ecosystem is going to try to find some way to get it in there. If you bundle up what that means for most developers and look at a lot of the conversation we've had today, I think it's going to be increasingly important that developers understand what's important in their workflow, what's important in the problems they're solving, and where AI can be valuable in it, because they're not going to have less opportunity to fit AI into their workflow. It's going to expand, and people are going to have to be increasingly discerning about the right places and the right times to use it.
[00:49:36] Tobias Macey:
Are there any other aspects of the use of generative AI in development, the impact that it's having on developers and the ecosystem, or any of your experience in that space that we didn't discuss yet that you'd like to cover before we close out the show?
[00:49:51] Tanner Burson:
I think we've hit on most of it. You opened with talking about AI maximalists and skeptics, and I am certainly typically more on the skeptical end of the spectrum, certainly with the state of the technology today, but I am still optimistic that there are useful things in here. I think there are real challenges still to figure out around cost. I expect costs for these tools will rise over time, at least in the midterm, for most users; they're being heavily subsidized today by the third-party vendors, or by Facebook open sourcing large portions of it. By all accounts, the amount of compute required to do what they're doing isn't decreasing, it's increasing, and that will inherently lead to rising costs. So figuring out how to balance cost to benefit, in real terms, in real dollars, for a lot of these is going to be increasingly challenging and important, and will likely result in a few of these tools fading away due to pure economics in the next few years.
[00:50:59] Tobias Macey:
Alright. Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And as the final question, I'd like to get your perspective on what you see as being the biggest gaps in tooling technology or human training for AI systems today.
[00:51:17] Tanner Burson:
I think there are lots. The human training is a particularly interesting one. We talked about how fast these things are evolving and how much of a black box they are, particularly on the third-party API side. That's a real challenge for people who have other things to focus on and are trying to stay up to speed. Figuring out how to improve people's baseline knowledge of these is going to be important, but I think we're probably still a few years out from most engineers having a better understanding of the fundamentals of these tools and how they work.
[00:51:53] Tobias Macey:
Well, thank you very much for taking the time today to join me and share your thoughts and experience of trying to navigate this landscape and help your teams do the same. It's definitely a very interesting problem area, and one that we're all trying to figure our way through right now. So I definitely appreciate you sharing your thoughts on it, and I hope you enjoy the rest of your day.

Tanner Burson:
Yeah, you as well. It's been great talking with you today.
[00:52:23] Tobias Macey:
Thank you for listening. And don't forget to check out our other shows: the Data Engineering Podcast, which covers the latest in modern data management, and Podcast.__init__, which covers the Python language, its community, and the innovative ways it is being used. You can visit the site at themachinelearningpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts@themachinelearningpodcast.com with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Introduction to AI Engineering Podcast
Guest Introduction: Tanner Burson
Defining Developer Roles in AI
AI's Impact on Software Development
AI Maximalists vs. AI Skeptics
Job Prospects in the Age of AI
Software Engineer vs. Coder
AI Tools in Development Workflows
AI in Code Reviews
Challenges with AI Dev Tools
Understanding LLMs and AI Tools
Navigating AI Tooling Friction
Building Bespoke AI Tools
Hardware Constraints in AI Development
Evaluating AI Tools as an Engineering Leader
Managing Risks with AI Tools
Code Review Challenges with AI
Personal Risks of Relying on AI
Innovative Uses of AI in Development
Lessons Learned in AI Tooling
Future Projections for AI in Development