Successfully coding with AI in large enterprises: Centralized rules, workflows for tech debt, & more

howiaipodcast · HtzkfjEH-GU · Watch on YouTube · Published July 20, 2025
Scored
Duration
44:56
Views
14,097
Likes
300

Scores

Composite
0.54
Freshness
0.00
Quality
0.86
Relevance
1.00
8,619 words · Language: en · Auto-generated

Vibe coding is not an acceptable enterprise development strategy. I love it. I can do a hundred commits a week by myself on my side project, on my startup. But when you're working on a codebase in a platform like LaunchDarkly that powers trillions and trillions of experiences every day, you can't take the same strategies and tactics that a vibe coder could take.

>> One of the things that I realized is what's good for humans is also good for LLMs. So I really started with: how do we make sure that the repo is well set up for humans to know how to work in it? So we have front-end organization, we have accessibility, we have a JS style guide. All of this very detailed documentation we've put into the repo itself rather than have it in other places, and this way LLMs can access it, humans can access it, etc.

>> I think all the engineers out there are crossing their fingers and hoping that there's one rules protocol to rule them all that shows up. And I think what you've shown is you can just create that yourself, and then that makes it much more scalable.

Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today we have a great episode for anybody trying to deploy AI agents in a real engineering team with a real codebase, not just vibe coding. We have Zach Davis, director of engineering at LaunchDarkly, who's going to show us how he sets up centralized rules and docs for all his AI agents, uses AI to burn down tech debt, and keeps his hiring bar high. Let's get to it.

This episode is brought to you by WorkOS. AI has already changed how we work. Tools are helping teams write better code, analyze customer data, and even handle support tickets automatically. But there's a catch: these tools only work well when they have deep access to company systems. Your copilot needs to see your entire codebase. Your chatbot needs to search across internal docs. And for enterprise buyers, that raises serious security concerns. That's why these apps face intense IT scrutiny from day one. To pass, they need secure authentication, access controls, audit logs, the whole suite of enterprise features. Building all that from scratch is a massive lift. That's where WorkOS comes in. WorkOS gives you drop-in APIs for enterprise features so your app can become enterprise ready and scale upmarket faster. Think of it like Stripe for enterprise features. OpenAI, Perplexity, and Cursor are already using WorkOS to move faster and meet enterprise demands. Join them and hundreds of other industry leaders at workos.com. Start building today.

Zach, I'm so excited to have you here because I feel like I maybe turned you into an AI fiend at this point. Before the show, we were talking about how many tools you're now using. So before we dive in, can you just give us a quick list, maybe a not-so-quick list, of all the AI tools the technology team at LaunchDarkly is now using?

>> Yeah, absolutely. Let's see. On the design side, we're exploring a bunch of things, so there's going to be a bunch of tools: Lovable, v0, Figma Make. On the product side, obviously we're using ChatGPT. And then on the engineering side, for code: heavy Cursor users, heavy Devin users, and we're also now using Cursor's background agent. I personally use Windsurf because I like Windsurf, but most of the rest of the org does not.
We're trialing Augment, we're looking into Claude Code, we're doing all the things. We're also looking into PR review, so we use Copilot for code review and we use Cursor for code review as well.

>> Okay. And I feel like 18 months ago we were using maybe a little GitHub Copilot, but not much.

>> Yeah, not much more.

>> And one of the things that I really liked that you and I did together, and you were a champ in coming along this journey, is we really decided that in order for AI to be effectively adopted by a team like LaunchDarkly's engineering organization, which is over 100 people, we really needed to put some concerted effort behind it and put a person in charge. And lucky you, you drew either the short or the long straw, however that works. What do you think about teams approaching this kind of engineering-wide transformation, and what kind of organizational and cultural things do you need to do to make it possible?

>> I do think it matters a lot to have a person who's, I don't know if "in charge" is the right word, but whose responsibility it is to drive that kind of change. And I think that having someone who's close to the code helps a lot, because you don't really know what's working and what's not working unless you're in the code at least on some basis. That can be a manager, that can be a director, but it has to be someone who's actually trying these things. And yeah, I think you were looking for someone to take that role, and I was skeptical, right, of how well things were working for you over on your side job on ChatPRD. But when I tried to do the same things in our codebase, I was struggling. So I really came at it from a standpoint of: I want to understand what works and what doesn't, and either be able to push back on you and say, "Hey, it's great over there, but on this larger codebase it's not working," or be able to actually drive change. And now I'm on board. Let's drive change. I think that matters a lot.

>> Yeah. And one of the things that I think is really important for our listeners, especially ones that are at growth-stage or larger companies, is that vibe coding is not an acceptable enterprise development strategy. Like, I love it, right? I can do 100 commits a week by myself on my side project, on my startup, and I can recover from quality issues; the maintainability of the code is not my big business issue right now. But when you're working on a codebase in a platform like LaunchDarkly that powers trillions and trillions of experiences every day, you can't take the same strategies and tactics that a vibe coder could take and bring those into not just an individual developer's workflow but an entire team's workflow. So what have you discovered as you try to figure out how to make these tools work for a larger team?

>> With smaller teams, you have more flexibility in terms of how you approach these things. With larger teams, you have more enablement and things like that. We're kind of in the messy middle, and I found it more difficult to operationalize that, to make everyone successful. What I found was everyone was on their own journey to try to be successful with AI, and that just doesn't scale very well, right? So you really need to come up with a system in order to make everyone more successful, right?
What I want is, when those skeptical engineers jump in and they try Cursor, they try whatever, for the first time or for the first time in a while, I want them to be successful so that they get that aha moment. And if you just leave them on their own, then you're not going to get there.

>> Totally. And I think you and I had mutually had the experience of developers having their first experience actually be negative. Whether it was with Cursor or with Devin, it was like, "See, I knew this was never going to work, and here's my first-pass proof that it didn't work." And so, what I appreciate about you is you did a lot of technical work to make sure people were successful. We'll definitely talk a little bit more about the culture and operations piece, but I actually want to dive into what you did in the codebase to make it easier to work with Cursor and Devin. Can you walk us through some of those things that you did?

>> Yeah, absolutely. So, in my IDE here, you can see one of the things that I realized is what's good for humans is also good for LLMs. So I really started with: how do we make sure that the repo is well set up for humans to know how to work in it? We have this docs directory, and I pulled a bunch of stuff from Confluence, from Google Docs, from other places in the repo, and I put it all in here, right? So we have front-end organization, we have accessibility, we have a JS style guide. All of this very detailed documentation that we've put into the repo itself rather than have it in other places, and this way LLMs can access it, humans can access it, etc. In addition, we had Cursor rules before, we had a CLAUDE.md file, and I wanted to consolidate that. So instead of the Cursor rules, I have this agents rules directory, and the idea is to centralize all of this knowledge in one place. You can see here I have something like TypeScript essentials, which has the quick hits of what's really important, and then it also links off to the comprehensive docs and says, "Hey, if you want to find out more, go look at the JS style guide." And so then our Cursor rules actually just point to that, right? Our Cursor rules say, hey, if you want TypeScript guidelines, go find this file in agents. And then, I talked about Augment earlier. I set this up yesterday: I asked the Augment agent to just create its file. I pointed it at the Cursor rules and at our agents rules and said, can you just create this file, and it did the same thing. This way we don't have to duplicate everything across multiple tools or tool files, and it's much easier to get stuff working well by default. The whole idea with this is, again, I want people to be successful out of the gate, and having this kind of centralized place, having all this documentation in the repo, just makes it way easier for tools to be successful by default.

>> Yeah. One thing that I hear a lot is people are really frustrated with the tool-specific rules. They're like, why do I have a CLAUDE.md? Why do I have Cursor rules? Why do I have these GitHub rules? Especially if you're experimenting with the number of tools that you're trying. Each tool has isolated its rule set in an individual file structure.
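As a rough sketch of the layout Zach describes, the paths and file names below are illustrative rather than LaunchDarkly's actual ones; the point is that the detailed docs live once in the repo and every tool-specific rules file is just a thin pointer into the shared agents directory:

```text
repo/
  docs/
    js-style-guide.md            # comprehensive, human-readable guidance
    accessibility.md
    frontend-organization.md
  agents/
    typescript-essentials.md     # the "quick hits," with links back into docs/
    feature-flags.md
  .cursor/rules/
    typescript.mdc               # thin pointer into agents/
  CLAUDE.md                      # also just points at agents/
```

A pointer rule can then stay tiny, something like:

```markdown
<!-- .cursor/rules/typescript.mdc (illustrative) -->
For TypeScript guidelines, read agents/typescript-essentials.md first.
For anything it does not cover, follow the full guide in docs/js-style-guide.md.
```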
And I think all the engineers out there are crossing their fingers and hoping that there's one rules protocol to rule them all that shows up. And I think what you've shown is you can just create that yourself. Create a directory in your repo. Put a consolidated set of rules there. Make your sub-rules for each of those tools point to those rules, so when you're working on front end they say, reference these rules, and that makes it much more scalable. The other thing that I want to call out is, as you said at the beginning, you're using, I don't know, a dozen tools, probably three to five IDEs across the engineering organization, and any individual engineer is probably testing, like, I want Cursor today and Devin tomorrow. And if you don't have your rules set up for all those tools, then you're starting from scratch every time. So I really like this idea of rule setup and then consolidation. I'm curious, do you feel like the rules have improved the quality of the outputs significantly?

>> Yeah, absolutely. So, here I'm looking at our feature flagging rules. And it's interesting because we have a lot of feature flagging code in the codebase, and one thing we noticed was that some of the models, some of the tools, would get confused about whether we were asking them to create feature flags in the LaunchDarkly product or whether we were actually trying to get them to do stuff in the code. So there's a bunch of stuff that I did to be really specific about how to be successful when creating feature flags: I want you to return a link, that kind of stuff. And it really has made a difference. Literally yesterday, one of our product managers was doing a task with Devin and was able to tell it to put the feature behind a flag, and Devin went and used the MCP and hit the flag, and everything worked.

>> So I have another question about rules, because LaunchDarkly's giant monorepo is 10 years old or something like that. It's got a lot of code in it: front end, back end, tests, all this stuff. If you had to give some advice to peer engineering leaders who are approaching the same problem, what are the must-have rules from your point of view? I saw a lot of front-end stuff, but what are the quick hits of what you think should belong in a kind of Cursor rules or a rule set?

>> I would say the best tip is to ask the agents to get you started. Devin actually has a great wiki: for each repo that Devin works on, it creates a wiki, and it has a ton of really good information in it. So I actually started with Devin and I said, "Hey, this is what I'm trying to do. Can you create basically the human-readable docs for this?" And Devin did a pass and created a bunch of docs, suggested some structure. We went back and forth and I tweaked things, and then I took that output and went through it with a fine-tooth comb, because I think it matters, right? It matters to get those details right. And then once I had the human-readable docs, I went to Cursor and said, "Hey, can you take these docs and your existing Cursor rules and turn those into the agents file?"
And so it was a combination where you can lean on the agents a little bit to help you get unstuck and get started, and then also use your knowledge of what's important in the repo. The other thing is that I was looking at where people were getting stuck. I knew that people on the front end would struggle with testing, basically writing unit tests; it would write Jest tests and we use Vitest, and things like that. So we put in specific rules for the places where people get stuck, rules to help the agents be more successful.

>> Would you mind pulling up Devin and actually giving an example of generating a rule? And I have an idea for you.

>> Sure.

>> Which is rules around generating data visualizations, since we've done so much of that, and just see what it comes up with.

>> Yeah. Here's an example of both asking the Devin wiki a question and then also using Devin to create a rule. So we can say to the Devin wiki: what are the libraries used for charting on the front end?

>> I'm curious, while this is loading, how long did it take you to set up Devin's environment? It's something where, you know, everything's easier with a little vibe-coded greenfield app, except for setting up Devin's environment. It's just as hard. I'm curious what your experience has been configuring Devin to work in a large repo.

>> Yeah, I would say to get up and running with Devin, I got started pretty quickly. We have a separate flow for front end and back end; we have a concept of a front-end-only mode, which proxies against another backend. So I was able to get Devin's machine up and running pretty quickly in just front-end-only mode, and then I was able to take on front-end tasks using Devin. One of our other engineering managers actually came in and saved the day on the back end, getting our whole end-to-end setup running with Devin, and that took him a little bit more time than it took me. But the nice thing is you can do that incrementally; you do what works. You don't have to have Devin running your full app locally in order to get value out of it. It's just about doing it piece by piece. And again, if it's hard to get Devin up and running, it's probably hard for your human developers to get up and running, so there's always an incentive to make those things better.

>> Yeah, and just because you said it, that Devin's environment doesn't have to be running for you to get value: my number one Devin prompting trick is, don't run this locally, just give me the code and I'll test it for you. So sometimes I bypass that process entirely. Okay. So you asked the Devin wiki what libraries are used for charting on the front end, and it gave an answer: Recharts and some other things. So, one, is this information pretty accurate, and two, what would you do with it?

>> Yes, it is accurate. We are using multiple libraries, and that was one of the things I was curious about: we've brought in several libraries and we're trying to figure out how to consolidate. So it picked out that we're using Recharts, we're using visx. It lists ECharts as a secondary library; I don't know if that's strictly true, but generally this seems very correct.
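One concrete example of the kind of rule Zach describes adding where people commonly get stuck, here the test-framework mixup he mentions; the path and wording are hypothetical, not the actual LaunchDarkly rule:

```markdown
<!-- agents/frontend/testing.md (hypothetical example) -->
- We use Vitest, not Jest: import { describe, it, expect, vi } from "vitest".
- Mock with vi.fn() / vi.mock(); never use jest.fn() or jest.mock().
- Run a single file with `yarn vitest run path/to/file.test.ts` before opening a PR.
- Full conventions live in docs/frontend-testing.md.
```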
And so I like to use the Devin wiki to just ask basic questions about the repo, make sure I understand what we're doing. But if I actually wanted to create a rule, you can't take action from the Devin wiki. So what I want to do is create a new document, a human-readable, human-centered document, in our front-end docs about how to use charting libraries, and then I also want to add a rule to the agents rules. So I'm just going to give this to Devin and see how it goes. Devin's going to spin up a new session here.

>> And one thing I want to call out for folks listening, about the prompt, is you specifically said you wanted to create a markdown document. Markdown is every engineering agent's favorite file type, so that's a good way to give a little bit of structure to your docs. It also tends to pretty-print and be human-readable and easy to view in GitHub and all that. So what you're doing here is just asking Devin to make those docs for you. And this is one of my favorite use cases of Devin. I think you know this; it's my favorite Devin hack, which is that I have a GitHub Action on every PR that writes docs for the PR and adds to a changelog programmatically with Devin. And I've found that it's a very good technical writer. Sometimes the code is just okay, but the technical writing is very clear and very good.

>> Yeah, I think that's exactly right. The Devin wiki is very good. It knows a lot about your codebase; it has this very explicit way of learning and understanding your codebase. So it is very good at describing that back, and as you said, doing it in a solid technical-writing way.

>> Yeah. And then one of the other things that I want to call out for people that are maybe listening and not watching is: we are chitchatting because we're waiting for Devin to spin up a virtual machine. For those that don't understand how Devin works, it actually spins up a virtual environment that reflects a development environment. It's going to open it up, it's going to read your codebase, it's going to do all this stuff. So it takes a minute to actually boot into an environment, a little different than running something like Cursor locally.

>> Yeah, that's exactly right. Where these other tools are just using whatever you have locally, Devin is running its own machine, which has a lot of upside. It can run a browser and see a browser; it can do a lot of things that don't come out of the box with these other tools. With the downside that, depending on your repo, it takes a little bit of time to actually set that machine up and get it running. But like you said before, if your machine is slow to cold boot for Devin, it's probably slow to set up locally for an engineer. So again, align incentives on getting your repo to work well for both your agent co-workers as well as your human colleagues. That is my favorite thing: all the things that have been hard for humans forever, that we have just kind of swallowed and said, well, that's the way this works, become even more important today with these LLM tools to solve and improve.

>> Well, I think it becomes more important to solve and improve them, and I also think it becomes easier to solve and improve them. If I had said to you two or three or four years ago, "Zack, go document everything in the repo, high-quality, human-readable docs, you just go do it by yourself."
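For the per-PR docs automation Claire mentions above, a minimal sketch could look like the workflow below. The workflow name, the helper script, and the secret are all assumptions; the actual call into Devin depends on whichever integration (API, Slack, or GitHub mention) a team has set up:

```yaml
# .github/workflows/pr-docs.yml (illustrative sketch, not the actual workflow)
name: PR docs via Devin
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  write-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Ask Devin to document this PR
        env:
          DEVIN_API_KEY: ${{ secrets.DEVIN_API_KEY }}  # assumed secret name
        run: |
          # Placeholder: kick off a Devin session with a prompt like
          # "Summarize this PR and append an entry to the changelog,"
          # using whatever Devin integration your team has configured.
          ./scripts/request-devin-docs.sh "${{ github.event.pull_request.number }}"
```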
It would take forever to generate high-quality docs that really reference our code and understand the nits and details. And I think the fact that you can spin up docs so quickly is so transformational to how you can, and I know we'll see this in a little bit, burn down tech debt and make your engineers happier. I do think a lot of engineers have this skepticism that adoption of AI tools is really about moving faster, shoving more junk into the code, just getting feature bloat. And I actually think that for mature engineering organizations, it is also an opportunity, if you approach it correctly, to take care of some of the things that you have just hated forever, in either how you run your software, how your team operates, or the code itself. And that's one of the advantages I think people underestimate in larger organizations, because they blur the line in their mind between AI-assisted engineering and vibe coding, which is not what we're talking about right now.

>> No, not at all. And technical debt is my favorite use case for AI to supercharge a medium-sized organization.

>> Okay. So what we're seeing here is Devin's looking through your repository, accessing its knowledge. Actually, I'll take a pause here.

>> Yep.

>> Have you set up the knowledge in Devin explicitly, which is, for folks that don't know, little snippets of rules, almost like Devin's rules in some ways, little snippets of knowledge and rules. Have you set those explicitly, or have you simply accepted and approved the ones that Devin suggests?

>> Yeah. So, as you mentioned, one of the things that I really liked about Devin, especially as we were first getting started, is it builds up this knowledge on its own in some ways. As you're interacting with Devin, it will make suggestions for additions to its knowledge so that it gets, quote unquote, smarter every time. Where some of these other tools have memories, Devin, because it's starting a fresh session every time across different users, has this centralized knowledge repository. So it's been a mix: we've let it build up over time, various people have accepted knowledge, I added some knowledge very early on, and I will intentionally add stuff when I run into problems. But then, when I moved all this documentation to the repo and was trying to centralize everything, Devin's knowledge came to primarily point at that same agents directory, because I don't want the duplication. I want it to work for all the tools. I don't want just Devin to be effective; I want all tools to be effective.

>> Got it. So you've really taken all your tools, whether locally hosted in the repo or cloud hosted like Devin, and made this agents folder the source of truth.

>> That is exactly right.

How I AI is now on Lenny's List with my personal selection of the best AI engineering courses on Maven. You can spend months thinking and playing with AI before really integrating it into your workflow or shipping an actual AI feature. If you want to start building, then these hands-on Maven courses are for you. Learn directly from Aishwarya Naresh Reganti, MIT instructor and AI scientist at AWS, or Sander Schulhoff, who has authored research with OpenAI, Hugging Face, and Stanford. To pivot into an AI role or successfully lead your company's next AI initiative, visit maven.com/lenny to enroll now.
Use code Lenny's for $100 off. That's maven.com/lenny to get ahead in the AI era and start building.

So Devin has a plan now. One of the things I like about Devin is it gives this confidence score: how confident is it in the task at hand? Which is nice, because sometimes it's not confident and it's better not to proceed. This is something that, as we mentioned, Devin should be really good at, so I feel good about its ability to execute this, but it will give you sort of an overview. If I read through this and I didn't like what it was doing, for example it's going to run Prettier on the markdown files, which actually I think is a good idea, but if I didn't think that was a good idea, I could update its plan while it's deciding what to do next.

>> Yeah. The other thing that I enjoy about Devin is, nine times out of ten, its confidence gets higher as it goes. It always starts at medium confidence, "but I have to investigate," and then it's high confidence, "I know what to do." But occasionally it fails me deeply, and I have bullied it so much that it starts to progressively lose confidence, and then it's low confidence, "I haven't been successful so far." So I find the confidence assessment pretty accurate.

>> Yeah. Okay. So now it is creating it. It's created multiple markdown files, so it's created a charting-libraries.md file, and we can actually, if we want, jump over. So there's a shell, there's also the code, so I can actually go look at what it's creating while it's creating it. So, charting libraries guideline: it's creating that in our agents frontend directory, and it looks like it also created one in docs. So this is the human-readable version, which I'm not going to go through in detail, but it has examples, it has the different libraries. I like all of that. And then in the agents rules it's a consolidated "you must use this when..." I like seeing that. I would go through here and really make sure it's accurate and is what we want. And then it's a little long, I think; for me, I want to keep the agents rules pretty concise, so you're not including too much context. And I would also want to make sure that it links out to the full documentation, which is another trick that I like, so that a tool can decide to pull in that additional context if it wants.

>> Well, one little trick that I learned from another How I AI guest is that, if you notice, Cursor reads long files in chunks of 200 lines. And so his goal was to keep these files under 200 lines so that it's not chunking the content. I saw yours is just a little bit over 200. So one of the things you might add to your rules for rules is: try to keep your rules files under 200 lines, for example. Now again, I don't know if that's actually helpful or true, but it is a tip somebody gave me, so I'm passing it along with no personal context.

>> Yeah. No, I mean, that's actually, again, a good tip for humans, just like it's a good tip for LLMs. And you said something that I think is really interesting, which is, I actually have a readme, a human readme, about the rules so that people understand how to create new rules, but I should probably have something geared towards LLMs, so that when LLMs are adding new rules, they're doing a better job of it.

>> Yeah. Okay. So, I see this. It looks great.
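A short "rules for writing rules" note of the kind Zach says he should add might look something like this, folding in the 200-line tip from above; the path and wording are illustrative:

```markdown
<!-- agents/README.md (illustrative) -->
When adding or editing rules in this directory:
- Keep each rules file concise, ideally under ~200 lines.
- Lead with the quick hits; link to the full doc in docs/ instead of copying it.
- Write for both humans and LLMs: plain markdown, concrete examples, no tool-specific syntax.
- Point tool-specific files (.cursor/rules, CLAUDE.md, etc.) here rather than duplicating content.
```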
So, it's created human-readable docs in your docs folder and rules for your LLMs. You're going to review this, do a PR, and merge those docs into the repo, maybe take a look at them and edit them. And you've used the Devin wiki, the Devin agent, and then it's spun up this codebase to write those docs. So I think this is a really great flow; I think people are going to learn a lot from this. You know, one of the things you said earlier was that tech debt is your favorite use case for these AI tools. I love to hear it, because this is how I try to pitch senior engineers and senior engineering leaders like you on really adopting these tools when they're skeptical. Can you walk us through how you actually approach burning down tech debt using these tools, and where it's maybe made things easier?

>> Yeah, absolutely. So, here we are in Cursor, and I'm going to show you that same agents directory. I showed you agents rules before; we also have agents/migrations. This has a couple of files in it. It has a CSS module conversion file, which I created to help us convert CSS files to CSS modules. And then it also has this one, which I just added the other day, and which is the one that I would like to show. What it is, basically, is a combination of instructions for agents and a checklist, basically a task list of what to burn down. The problem that I was running into is that with our front-end unit tests, when you run yarn test, there's so much noise in the console that it's really distracting. There are some actual legitimate problems in there that are just being warned about and ignored, and I wanted to pay that down, but it's one of those things that is annoying, yet not quite annoying enough for someone to own it. And also, it's such a big problem that it's really hard for one person to just take it and pay it down.

>> And I'll say, imagine as an engineer you go to your product counterpart and you're like, hey, I just want to spend a week or two making our test logs a little less noisy so my life's just a tiny bit easier. It's such a hard pitch to make for work like this. It's super important, and the pitch can work on the right leader, but again, this is the kind of thing that's hard to justify in a fast-moving org.

>> Yeah. So, I think what I'm going to do is talk through how this works, and in the background I'll have Cursor take the next task. So, I'm actually going to write the prompt now.
It has this context, it knows I'm in this file, and I'm just going to say: can you take the next tier of tasks? I can see here there's a tier one, a tier two; there are three files, I think that's reasonable, and fix them. And I'm just going to click go and we're going to see what happens. Okay, so in the meantime: what I did to actually produce this is I ran yarn test and piped the output to a log file, which, I'm not a super techy tech person, so I actually asked Cursor how to do that effectively. Then I had a log file and I gave that to Claude, to Claude Code, and asked Claude to basically create this file. What it did is it went through all of it; it actually had trouble with how big that file was, but it was smart about working around that. It found that we have something like 1,200 extra lines in a test run that don't need to be there, that we don't really want there, and then it quantified this, grouping it into different types of warnings and which files are the worst offenders. And then, once we had this file, I said, great, can you go fix the worst of the tier one offenders? And it actually went and did that successfully; that's been reviewed and merged in. And then I can do things like this, and the thing that I like about it is you can just give this to any agent now. I can Slack Devin and say, "@Devin, can you pick up the next task in the front-end test noise cleanup?" I can do it here in Cursor and watch it go. I could give it to Cursor's background agent. It makes it easy to pick these things up as individual tasks and make progress on them.

>> What I like about this approach as well is that there are a lot of parallels to how you would approach something like this with an engineering team of human partners, right? You're going to take a problem, somebody's going to go investigate it and identify priority tasks, and you're going to put those tasks in some sort of task tracking system. And you and I both know all of our beloved task tracking and project management systems. I am starting to see Cursor markdown files become the new task tracking system; I'm seeing this trend of these checkmarked files in Cursor just being the source of truth for progress on initiatives. So you've created basically a list of epics and tasks here, if that's what we call it, and they're prioritized by how severe they are. And what I like about how you're approaching this: instead of saying, rip through all 1,300 noisy lines, you're saying, prioritize them, do them one by one. And then, what I'm presuming you're doing is the work happens, whatever agent you decide does the next task closes it out, you review the PR, you make sure the changes work, you merge it, and it gets marked off. The other thing I want to call out is that while you are probably running this yourself, you could also get more people on the team to be aware that this task list exists and just say, "Hey, if you have a few minutes and you're able to review the next set of noisy tests, tell Devin to pluck one off and do the code review for me; it's all set up and ready to go." So, you know, I think this multiplayer aspect is very important in how you approach some of these tools when you're working in a larger team.

>> Yeah.
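A rough picture of the workflow Zach walks through: capture the noisy test output once, have an agent summarize it into a prioritized markdown checklist, then let any agent (or human) pick off one item at a time. The redirect is standard shell; the file path, names, and numbers below are illustrative of the shape, not the actual LaunchDarkly file:

```markdown
<!-- agents/migrations/frontend-test-noise.md (illustrative) -->
<!-- Source data: `yarn test > test-output.log 2>&1`, summarized by an agent -->

## Instructions for agents
- Pick the next unchecked file, fix only the warnings listed for it, and open a small PR.
- Do not change test behavior; re-run the affected tests and confirm the console is quieter.
- Check the box in this file as part of the same PR.

## Tier 1: worst offenders (mostly accessibility warnings)
- [x] src/components/FlagTable.test.tsx (~180 warning lines)
- [ ] src/components/UsageChart.test.tsx (~120 warning lines)
- [ ] src/components/MemberList.test.tsx (~95 warning lines)

## Tier 2
- [ ] src/pages/Settings.test.tsx (~40 warning lines)
```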
Just today I had a PR up to fix a few stray errors in this one file, and I included one of the people from the team that works primarily on that in the review, and he said, "Hey, if there's any more stuff like this, feel free to throw it over the wall to us; you don't have to be the one that does all of this." He didn't know that I was just using, you know, Claude or whoever. And so now I can actually just point him at this file. I said, "Hey, take a look at this file, and if there are any you want to pick off in your ownership area, just go ahead." And you're exactly right about democratizing that. This is great, again, I'm saying the same things, but it's great for bots, and it's also great for humans: humans can come in here and understand this and work against it.

>> Yeah. And if you're feeling like crafting your farm-to-table code and you want to pluck one of these off yourself and fix it, you can approach it the same way, right? Just open the file, mark the thing as done, do the PR. So I really do think it's important that folks think of these tools as an extension of the team. The more the tools can operate the way the team would operate, and the more the team can operate in the same way the tools operate, the more we can all collaborate together and be much more efficient. So I think this is a super great example. I'm not going to make all of us watch Cursor go through tests and lint errors, because I have lost enough of my life to doing that, but I think it was a really great example of tackling tech debt. And then just to ask the question: what's the end payoff for front-end developers? The actual issues bubble up in your tests and these tests get less noisy?

>> Yeah, I think one is it's easier to find things when something is going wrong. Two is, I think it said that the biggest problem was actually accessibility warnings, so that's a real problem that exists. But when there are 1,200 lines of that, and a lot of it is coming from the same component, which, if it's tested a bunch of times, will spam the logs, being able to surface the actual signal through the noise is, I think, one of the key benefits.

>> Okay. And then for our last workflow: I know, Zach, you're going to impress everybody, and everybody's going to think you are just an AI-enabled, cutting-edge engineering leader who only works with his army of bot friends, but you're actually hiring at LaunchDarkly. We're expanding the team, you're always bringing in great talent, and you've actually used AI to solve another problem, which is making sure that you're doing a great job hiring. So, do you mind spending a couple of minutes on what that little workflow looks like?

>> Yeah, absolutely. So, me, I'm a little bit of a conflict-avoidant person. I don't love giving people tough feedback. It's something I've grown to do over my career, but especially when it's someone I don't have a strong relationship with, not a direct report, I don't love just dropping in and being like, "Hey, this isn't great." But we were trying to make our hiring more consistent, and I created a rubric for all of the panels that we have, so there were really clear guidelines about how to score a candidate.
But the other piece of it was we needed people to follow those guidelines, and I wanted to be able to give people feedback, basically to raise the bar of the actual scorecards that we were creating. So I created this custom GPT. I gave it the rubrics, and I gave it examples of good scorecards and bad scorecards, and you helped me write the prompt, so thank you very much for that. So what I'm going to do is just paste in a scorecard. This is a scorecard that we got, and I'm going to click go. And it's going to do a few things. One, the rating that it gives me is the rating of the scorecard itself, so it's a little meta. One of the things I did in the prompt is I said, give it a rating: is it excellent, good, fair, or poor as a scorecard? And then I want you to list out strengths and potential improvements. And then the last thing that I had it do, which I also think you helped me with, so thank you very much, is the format: I wanted to give this to people over Slack, right, like, "Hey, thanks for doing this, but also I had a little bit of feedback." So it gives me the detailed feedback, but it also crafts a short Slack message that I can, if I want, just copy and paste and send to the person who created the scorecard.

>> I love this, because so many managers and hiring managers can empathize with it. If you're running an interview panel, you're having everyone from your boss to your direct reports to people you've never really worked with directly interview candidates, right? You have these cross-functional interviews, and while you can have all the rubrics in the world, interviewers sometimes write terrible notes, or assess the wrong things, or don't give you the right details, or really use the rubric incorrectly, and you're not sitting in every single one of those interviews to give live coaching. So this is a really nice way to make sure that you're holding the standard very high, and it gives you some leverage as a manager to coach your team. Then, as they get that coaching, they get better at writing interview feedback, and you can be more confident in your hiring decisions.

>> Yeah. And honestly, this helped me too. In order to test it out, I was doing a bunch of interviewing and writing scorecards, and I would paste them in and see what kind of feedback it gave me, and it was giving me very good feedback. I learned very quickly the kinds of things: be more specific, avoid certain kinds of things. So it actually made me write better scorecards just through trying to create this tool for other people.
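The custom GPT's instructions aren't shown on screen, but based on what Zach describes, a prompt in this spirit might look like the following; the exact wording is an assumption, not the actual prompt:

```text
You review interview scorecards against our hiring rubric (attached).
For each scorecard pasted in:
1. Rate the scorecard itself: Excellent, Good, Fair, or Poor.
2. List its strengths (specific evidence, clear mapping to rubric criteria).
3. List potential improvements (vague claims, missing examples, scoring things
   outside the rubric).
4. Draft a short, friendly Slack message to the interviewer: thank them, share
   one or two concrete suggestions, and keep it under ~100 words.
Judge the quality of the write-up, not the candidate.
```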
>> Okay, Zach, you have given a master class in how engineering leaders at larger companies can approach integrating AI into not just their individual workflows but their team workflows. So, just to cover what we talked about: one, let the team experiment with every tool and see what works; you seem pretty generous with your experimentation mindset around which tools can bring value to the team. Two, context is king, so you're loading up your actual repository with docs and rules. Three, those rules are centralized, so you don't use agent- or tool-specific rules. You create a central agents repo and then point all your specific tools toward that. You use your AI tools to actually create those rules. You use Cursor and other tools to create plans to burn down tech debt, and then have those AI tools burn down that tech debt. And then, since you have all this free time now, you're coaching yourself and your team to be better interviewers and better hirers. So just that, no big deal, that's all you have to do.

>> A few things, yeah.

>> And this is all, I mean, truly from a personal professional development perspective, these skills were developed, what, in the last 12 months, right?

>> Oh, I think January is when I really started.

>> Okay.

>> That's when I took on the mantle, started playing with Devin, and really went down this path.

>> So, six months. And we didn't give you, I mean, I think we offered, but no formal L&D, none of that. We just pushed you into it and said go.

>> Yeah.

>> Okay. I'm going to wrap with two quick lightning round questions and then I'll get you back to all your AI-assisted code. Question number one: you listed so many AI tools. Which one is your favorite, or which one has been most transformational?

>> Oh, that's really hard. I would say Windsurf, actually. Everyone was hot on Cursor, and the UX at the time was just not clicking for me, and I saw a video for Windsurf and thought, whatever, I'll give it a try. I had the free trial, and within an hour I think I was paying for it, because it really clicked for me; the agent workflow just clicked and I was hooked.

>> Amazing. And then, when AI is not listening to you... you're so conflict avoidant, so I'm actually very interested in your answer here. AI is not listening to you, you need to give it harsh feedback. What are your tactics? I know you don't yell, I know you're very polite, but what do you do?

>> I mean, sometimes I lose it, but the thing that I actually do is, sometimes I just feel like it's not the right task, right? So it depends. If I think it's something that AI should be good at, then I get a little snippy with it. Maybe I don't yell, but I'm definitely getting a little annoyed. But I also think that sometimes it's okay, right? Sometimes it's not going to work, and you don't have to keep banging your head against it. I think developing that sense of where it works and where it doesn't has been really powerful for me. And also, sometimes I just like getting in there and getting my hands dirty in the code. So yeah, my technique is actually either I do it myself or I go back and try to fix the setup: am I providing the right context? What is missing that it can't accomplish this effectively?

>> Yeah, you're a very good manager, so I think it comes from those skills. All right, Zach, this has been super informative. Where can we find you, and is there anything we can do to be helpful?

>> I'm on LinkedIn. We are hiring at LaunchDarkly, and also, if you are a LaunchDarkly user and you have any feedback, I love user feedback, so please send it my way.

>> Amazing. Well, thank you so much, Zach.

>> Thank you.

>> Thanks so much for watching. If you enjoyed this show, please like and subscribe here on YouTube, or even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app.
Please consider leaving us a rating and review which will help others find the show. You can see all our episodes and learn more about the show at howiipod.com. See you next time.

Summary

Zach Davis from LaunchDarkly shares how his engineering team successfully integrates AI tools into a large, complex codebase by implementing centralized rules, documentation, and workflows to manage tech debt and improve hiring processes.

Key Points

  • Vibe coding is not suitable for large enterprises; structured approaches are needed for team-wide AI adoption.
  • Centralizing documentation and rules in a single repository makes AI tools more effective and scalable.
  • Using AI tools like Devin, Cursor, and Copilot requires intentional setup and context to be successful.
  • AI can be used to systematically burn down tech debt by breaking down large tasks into manageable pieces.
  • AI can assist in hiring by providing feedback on interview scorecards to maintain high hiring standards.
  • Engineers should use AI to generate human-readable docs and rules, then use those same docs to guide AI agents.
  • The key to success is making AI tools work like human team members by aligning their processes with team workflows.
  • Tools like Devin can spin up virtual environments to execute tasks, but require time to set up.
  • Consolidating rules across all AI tools prevents duplication and ensures consistency.
  • Leaders should empower engineers to experiment with different AI tools and find what works best.

Key Takeaways

  • Create a centralized repository for documentation and rules to make AI tools more effective and scalable.
  • Use AI to generate human-readable docs and rules, then use those to guide AI agents toward consistent output.
  • Break down large tech debt tasks into smaller, prioritized pieces to make AI-assisted work manageable.
  • Leverage AI to improve hiring by providing feedback on interview scorecards to maintain high standards.
  • Treat AI tools as team members by aligning their workflows with human team processes.

Primary Category

AI Engineering

Secondary Categories

LLMs & Language Models, AI Tools & Frameworks, Programming & Development

Topics

AI adoption, centralized rules, AI agents, technical debt, codebase documentation, engineering team, enterprise development, vibe coding, hiring process, interview feedback

Entities

people
Zach Davis, Claire Vo
organizations
LaunchDarkly, Atlassian, WorkOS, Lenny's List, Maven, OpenAI, Perplexity, Cursor, Devin, Windsurf
products
technologies
domain_specific

Sentiment

0.85 (Positive)

Content Type

interview

Difficulty

intermediate

Tone

educational, technical, instructional, professional, entertaining