High growth startups: Uber and CloudKitchens with Charles-Axel Dein
at Uber. When did you join? >> I joined in 2012. At the time, the team was about 20 engineers. There was no road map, very little process. You learn by doing, essentially. >> How is AI already changing software engineering, especially for the work that you're doing at CloudKitchens? >> We did our own study, and we found it was moderately useful. Every single time we have one of those revolutions, and AI might be the biggest yet, don't get me wrong, but every time we have those revolutions, the press and everyone speaks about replacing engineers. We're still there. >> What are some non-obvious recommendations to thrive inside a high-growth startup? >> I talked about memorization. Using flashcards to memorize stuff is very powerful. And second, I'm not sure if it's super non-obvious, but extreme ownership. So that means... >> What does it take to thrive at a fast-growing company as a software engineer? Charles-Axel Dein was engineer number 20 at Uber and saw firsthand how the company grew from 20 engineers to more than 2,000 in just 5 years. He also happened to hire me at Uber and was my first manager at the company. Today, Charles works at CloudKitchens and has been maintaining the wildly popular GitHub repository professional-programming, a collection of links and resources for engineers, for 15 years. In this episode, we go into: what it's like to work inside a rapid-growth startup like Uber during its high-growth times; how to be a standout software engineer, including tips on how to lead projects efficiently and the importance of a "ship stuff and lift people around you" mindset; Charles's personal productivity tips, including why he uses flashcards to memorize things like architecture patterns and data science methodologies; how Charles thinks AI will change software engineering; and why migrations are the best use case CloudKitchens has found so far for AI coding tools.
If you're working at a fast-paced startup, or plan on working at one, and are looking for tactical advice on how to do well in such an environment, this episode is for you. This podcast episode is presented by Statsig, the unified platform for flags, analytics, experiments, and more. Check out the show notes to learn more about them and our other season sponsors. With that, let's jump in. So, Charles, welcome to the podcast. >> Thanks, Gergely. I'm very happy to be here. >> It is so nice to be back. And this is not the first time we're sitting in a situation like this. The last time was a bit more stressful: it was an interview at Uber. You were the hiring manager, and I think you were the final interview in a series of interviews, and that's how I joined. That's how we started to work together. >> Yeah. And actually, right before you joined, I don't know if you remember, but I called you to ask if you could change stack. I think you were hired as a backend engineer, and I asked you to do iOS, because that's where we needed people, and you said, yeah, let's do it. >> It was Android. It was a very interesting interview situation. I was doing iOS development back then; I did Windows Phone before that. And I don't know if you know, but my interview got messed up. I thought I would be doing an iOS interview, and then they put me on a backend track, and I had done backend a few years before. So I was like, all right, let's do it. And apparently I did well enough on it, which was surprising. And then I was supposed to join as a Python engineer. I hadn't done Python before, but obviously Python you can learn easily, so I was like, okay, let me learn it. I got a Python book, you know, Python 101, and then you emailed me saying, could you do Android? Was this typical of Uber back then, this kind of hectic situation? What was the reason? It all felt very hectic when I joined.
Yeah, it was definitely very hectic, and it's interesting, because when you're in those hectic times, you feel like this is not normal, and you would much prefer a quieter time; but then when it gets quieter, you miss the hectic times. So it was definitely that hectic. It's not the most surprising thing that happens when it comes to hiring. But a good hectic, right? You're growing super fast, so you have to ask people to be flexible, and I think that's what people look for in a startup: they want to see things that are very different, where they get exposed to a lot of different things. And that's definitely what you got: a panel for one stack, a background in another stack, and then you get hired and, I think it was close to your first day, you get asked to do Android. It was pretty hectic. >> And at Uber, when did you join? You were very early. >> Yeah. I joined in 2012, thanks to the first driver operations person, who I was working with before. And at the time, yeah, the team was about 20 engineers. >> Like engineer number 20-ish. >> Yeah, something like that. So definitely super interesting, because there was no road map and very little process. You're discovering a lot of things as you go, and what better way to learn, right? When you are at a startup, things are very unstructured, you get exposed to a lot of problems, and you make a lot of things up. You learn by doing, essentially, and it was a truly amazing experience. I think it was the same for you. >> And then you joined as a software engineer and made your way into engineering management, right? >> Yes, that's right. One year later, Thuan Pham, who was the CTO at the time, asked me to step in, and it was the best career decision ever. It was a really rewarding experience. And since then, I've moved back and forth between software engineer and engineering manager, and I really appreciate doing both.
I believe in really hands-on engineering managers. >> And being that early, you must have worked with Travis Kalanick, the co-founder of Uber, right? >> Yes. I actually still have some pretty good emails from when we broke features. What was interesting at Uber is that we built a product that helps people make a living. So when there's an incident, it's not only a feature that is broken; it's also people who were not paid on time. I remember this event where the payment process for drivers failed, right before Christmas, and we got forwarded an email from a driver who couldn't purchase gifts for their kids. I think that shows the responsibility that you have as a software engineer when you own things in production. It's not only features, it's not only code; it's also potentially people's livelihoods. >> I remember when I interviewed at Uber, you actually told me something similar, and it was the first time I paused and thought: oh, I could actually work on something that has real-world implications, in terms of people relying on it, it being important. And I do remember that whenever we did an incident review, it wasn't just about how many ad dollars we lost, which, again, at some places that's what it is; it was about how many people's rides could not happen, and behind that there's a story, there's frustration. I mean, I've had it before: a driver cancels, for example, when I'm in a hurry, or I'm stuck in a traffic jam and I'm late, and that itself is frustrating enough. >> Yeah. Yeah.
So right now I'm working at CloudKitchens, which is another startup that works in the physical world. Today we talk a lot about AI startups, and it's evidently a very important sector, but at the same time, there are all those startups that actually change things in the physical world and have an impact on people's lives. I think people should also consider joining them, because you can see the result of your work. So that's one. Two, the physical world is a source of never-ending complexity and challenges, right? For instance, when it comes to delivering food, you have optimization across space and time, which is really interesting from a technical standpoint, from an algorithms standpoint, from a systems standpoint, from a latency standpoint. You have a budget of time that is not infinite, because the food needs to get delivered. So it's really interesting to get exposed to the physical world. And the other thing I would say: we talk a lot about startups and virtual products and social networks and their impact on people's lives, which is not always positive; we talk a lot about addiction, for instance. With software that is used primarily to optimize processes in the physical world, you can say that it's fundamentally good. Right? >> Yeah. Yeah. And very different challenges. I feel we are seeing a bit more of a push where more people are considering these companies, and there's a bit more visibility for this kind of software. But one thing that is just not as, I guess, sexy about them is the hypergrowth; you rarely see it. Uber was an exception, and CloudKitchens might be another exception. So at CloudKitchens, how are things similar or different to Uber? >> You're right that the time of hyperscale startups is probably behind us. I think you wrote a lot of articles on this, the end of zero interest rates.
What this means is that the model of the hyperscale startup that grows super fast and hires extremely fast is probably behind us, and it's probably a good thing. The thing we do differently at CloudKitchens is that we're much smaller in terms of team size and much more focused. And you're right, it's really interesting to see that there's a bit more humility and less money floating around, which probably leads to better decisions. At the time of Uber, there was probably a lot of waste in the way we hired, the way we built teams, the way we started projects. >> Actually, talking about how we managed projects at Uber touches on the origin story of our season sponsor, Linear. The idea for Linear came about when their founders were going through hypergrowth phases at Airbnb, Coinbase, and Uber. As you'd expect with real scale, these companies started to slow down. What used to take days started taking weeks, then sometimes even months. Not because people worked less hard, but because there were a lot more moving parts that needed to be coordinated. As an example, in the early days of Uber, it took a single engineer about five days to integrate, test, and ship a new payment method to the app: Google Wallet. But years later, it took two months for three engineers on my team to ship and release Google Pay, because there was so much more planning, coordination with stakeholders, working with other stakeholder teams, and the vendors themselves. As teams grow in size, product development gets hit particularly hard. Every team involved in the process uses a different set of tools and workflows. This fragmentation means there's no scalable way to answer: what's been committed, what's at risk, who's actually accountable, who are we building this feature for. It's often a total mess.
The conventional approach is to compensate for tooling gaps with more headcount or more status meetings, but in my experience, it doesn't help much. This is why Linear exists: to give high-growth teams the clarity and coordination they need without the overhead. Linear's founders built the tool they wish they had during those chaotic hypergrowth scaling phases. You can try it yourself at linear.app/pragmatic and see why teams like Ramp and Clay also switched over. >> So let's talk a little bit about that. I do think hyperscaling is behind us, because there was this special time when there was so much money on the market, and Uber was so good at raising money. They raised billions for a physical product, and with every single round, the valuations kept going higher and higher. We are seeing something similar right now with AI, but it's not in the physical world. Well, you can argue that GPUs are, but I feel that's a little bit different. But let's talk about what it was like to be inside, because I remember when I joined, it was crazy. One of my first memories of how crazy it was: when I became an engineering manager, I was an apprentice manager and you were my manager, and I asked you how headcount planning works. You told me, "The way headcount planning should work is: we make a plan, we make an ask, and we do it. In reality, you keep hiring, and when you hit your limit, you get more headcount, because it just comes out of this black box. It's always been like this." And you were like, "Don't worry, it doesn't make sense. Just keep hiring." >> Yeah. Until it makes sense, right? And yeah, clearly there was probably a lack of financial professionalism and of being diligent with the company's resources.
But I think now, as you've described, with the end of zero interest rates, it's probably much sounder financial principles driving headcount planning. But to come back to hyperscale: it's really chaos. That's what it is. But good chaos, right? Chaos where you get exposed to many problems. And if you're curious, which I think is a key quality to have if you want to grow, then you get exposed to so many problems at the same time, which means you have to ramp up on those problems really fast, and you get to learn, because the best way to learn is to try something, get feedback, and keep iterating with a continuous-improvement mindset. So that was really fascinating. But you're absolutely right: hyperscale meant one onboarding per week. And when you do one engineering onboarding per week, you have to standardize it, you have to structure it, you have to build a process around it. >> Just in Amsterdam, I mean, we were a smaller office, but as you say, we had on average one or two engineers joining per week in the office. When I joined, it was about 20 engineers, and every week we would have one or two new people. I remember that as a result, we actually spent a lot of time improving the onboarding process, which is kind of weird to think about now, because a lot of companies that I talk with today don't care too much about it; they have a new joiner, like, every second month or so. But because we did so much onboarding, and obviously so much hiring, we had to optimize around that, for example by batching the start dates for new engineers. We also had this thing where, if there was a crunch time for a project, we might not do interviews for a week, and the recruitment team was really upset about that, because their
targets were different. Can you explain what you saw, what hyperscale actually meant in terms of the day-to-day, and how it later eased into something more normal? Because I feel a lot of people have not seen it, and they will probably not see it, so it's good for us to describe what we saw inside. >> Yeah. And by the way, on the hiring side, the irony is that usually the team that needs the most new hires is going to be the most impacted by interviewing, because you have to interview for your own team; at least that's how it's organized at most startups. You remember it, right? I don't want to say a never-ending stream of incidents, but a lot of incidents. A lot of incidents where you feel like you're covering the immediate action items that result from the incident, but you don't have time to actually fix the fundamental architectural root cause or complexity. So yeah, incidents definitely, and on-call was terrible. >> So you have those incidents, and at the same time you also need to build new features, build new products, and deliver, right? >> Well, I remember that we had a lot of enthusiasm coming through the door. I still think back to how we managed with so much; our on-call was terrible, we were waking up in the middle of the night almost every night, and we tried to fix it, but it was just patches. But I remember that a lot of new people were joining, and people were very excited, because from the outside the company was very attractive.
They brought so much energy, and that energy definitely lasted for a good four to six months. And we had a never-ending stream of people, so just as you were getting tired, someone would say, "Oh cool, let's do this, let's fix it." There was a mentality of "let's fix this," which at first meant a lot of band-aiding; later, as our growth slowed, we started to think about how we fix the system, how we turn this into a platform, how we rewrite, and oh, there were so many migrations. >> Yeah. To a point, a bit too many. After 1,000 engineers or 10,000 employees, you don't need a customer, right? You're going to have teams that create migration requests for other teams, and the company can operate in a closed circle. But yeah, the incident is really the best way to learn about your architecture's deficiencies. You can look at the architecture on paper and say, okay, this component is probably a problem.
But the best way to really know the bottlenecks is to see what breaks. Especially in 2012 and 2013, every Friday evening I would prepare to come back home, and then something would break, and it would be firefighting mode. This is how you learn that, oh, Redis breaks after that, and oh, Postgres breaks after that, and oh, we have this queue here that is not dimensioned correctly, or we have this instance... And at the time, autoscaling wasn't really a thing. So really interesting, and a great way to learn about your systems' deficiencies, still to this day. >> I'm not sure if it had to do with hyperscaling or not, but we had this rule that when you are deploying what seems like a medium-sized change, you go to the Slack channel, or the chat channel, and let people know that you're doing it, because we were used to "this could cause issues." So you're just giving a heads-up: if you see something going wrong, let us know. Again, back then we didn't have that good observability; in fact, we had to build our own observability, as there were no vendors we could onboard. I do think that with this high growth and high number of incidents, it was hard to slow down, and we always had to keep shipping. The reason we couldn't really stop and fix things: I remember with planning, we were like, okay, we have, let's say, 10 engineers, and we can do three projects at max, and each project had such a high impact, we were talking $50 to $100 million of incremental revenue for that team, that if we had stopped to fix the systems, we would have been saying no to bringing in that much extra revenue, which kind of didn't make sense. So we tried to do both, and we prioritized building new stuff.
>> And sometimes you also have a regulatory context and other things; there are certain projects that you must do, and it's not really up to you to make that call. But coming back to the incidents, and the fact that you had to announce deployments: this is why I think the most fundamental way to improve this is to decouple the release from the deployment. First you deploy your code, and it's behind a feature flag, so nothing happens. Then you turn on your feature flag just for your own user, so that you can test, and then you roll out slowly. This very simple approach works wonders in terms of providing stability, because you get into the habit of making sure that a deploy never breaks anything. You first release and test with your own user, and then, because you're the one turning on the feature flag, you are presumably following the feature and monitoring it with the product manager, and so on. So this is a very simple strategy to ensure stability; I think if we had used it better at Uber, we would have saved ourselves a lot of unnecessary craziness. >> One thing that was really unique about Uber when it came to deploying: high-risk environments, like banks, typically have different environments. They have the development environment, then they deploy to the staging environment, they might have user acceptance testing, and then production, and these are all physical environments that code needs to be deployed to. Uber didn't really do this. Instead, we had the concept of tenancies: we would run the same code everywhere, but we would have a test tenancy, for example, plus feature flags.
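The decouple-deploy-from-release pattern Charles describes can be sketched in a few lines. This is a minimal illustration, not Uber's or Statsig's actual API: the flag store, the `x-tenancy`-style allowlist, and the percentage rollout are all assumptions, and a real system would read flags from a config service so they can change without a deploy.

```python
import hashlib

# Illustrative in-memory flag store; real systems fetch this at runtime
# so flipping a flag never requires a new deployment.
FLAGS = {
    "new_payment_flow": {"enabled": True, "rollout_pct": 5, "allowlist": {"charles"}},
}

def flag_enabled(flag_name: str, user_id: str) -> bool:
    """Return True if this user falls inside the flag's rollout."""
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False
    if user_id in flag["allowlist"]:
        return True  # the engineer testing their own feature in production
    # Hash the user id so each user lands in a stable bucket in [0, 100).
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_pct"]

def charge(user_id: str, amount_cents: int) -> str:
    # Both code paths are deployed; the flag decides which one runs.
    if flag_enabled("new_payment_flow", user_id):
        return f"new flow: charged {amount_cents}"
    return f"old flow: charged {amount_cents}"
```

The key property is the one Charles calls out: the deploy itself is a no-op until the owner turns the flag on, first for themselves, then for a small percentage, while watching the metrics.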
What do you think about the upsides and downsides of either approach, of having the separate stable environments, especially when we're talking about the physical world, when you don't want to break things, when breaking things does have a lot of implications? It could be upset customers, or money lost, or churn; real-world impact. >> Yeah. Actually, when I left Uber, there were no tenancies yet; we were using a dev server. I don't know if you remember that time, where every engineer would have their own development environment, where you would install the whole stack on one machine. Evidently the machine was oversized for what we were doing. So we were installing the whole stack of Uber, or at least most of it, and that's what you would use to test end to end. And what's great is you could share it; you could go crazy with it. But yeah, I think we missed having a preprod or staging environment. I think you need both, or at least all three of them: you need a way to quickly deploy one instance of your service and route traffic to it with tenancies; you need a staging environment where you can go crazy; and probably some kind of preprod that is much more stable, where you open an incident if something is broken. The idea being: the earlier you detect a problem, the better, because it speeds up the process. If you detect an incident at the production stage, you're going to waste hours rolling back, or maybe fixing forward if you don't have a good rollback story. So the earlier you detect, the better. >> I remember that we had an incident where someone on social media posted that something was broken at Uber, and I remember you were upset. You were upset that we didn't catch it, that our monitoring did not catch it, and that someone had to post on social media.
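The tenancy idea discussed above — the same production code serving all traffic, with test requests tagged and given isolated side effects — can be sketched roughly as follows. The header name `x-tenancy` and the behavior are illustrative assumptions, not Uber's actual implementation:

```python
def handle_payment_request(headers: dict) -> dict:
    """Route by tenancy tag instead of by environment.

    One deployed service handles both kinds of traffic; downstream
    services are expected to propagate the same tag so the whole
    request tree stays in the test tenancy.
    """
    tenancy = headers.get("x-tenancy", "production")
    if tenancy == "test":
        # Real code path, isolated side effects: sandbox payment
        # provider, shadow datastore, no real SMS to drivers.
        return {"tenancy": "test", "charged": False}
    # Production traffic: real charge, real notifications.
    return {"tenancy": "production", "charged": True}
```

The trade-off the conversation circles around: this exercises the exact binaries and infrastructure that serve customers, which a separate staging cluster never quite does, at the cost of having to tenancy-isolate every side effect.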
I remember, though you might not, what you told us: this is the ultimate failure, us needing to learn from a customer that our system is broken. We should have had a system flag it; if we had then missed it, that would have been on us. This lesson still sticks with me. You were so mad, not because of the incident, but because we did not have the means to see it; we were basically flying blind, and someone exposed us. I mean, it was kind of a good thing, but it stuck with me. Since then, I always ask: how can we make sure that we will know, and will not need someone to tell us, "okay, this is broken"? >> Yeah, frankly, to this day, even on my current team, we still have to work on those things. There are three metrics at CloudKitchens that we always look at for any incident: time to detection, time to mitigation, and time to resolution. The most important one for me is time to mitigation. It means: how much time does it take us to bring the system back to a state where the customer doesn't see the impact? Resolution comes after. And if you waste a week because you did not detect the issue, there you go: this is what you need to fix. And at scale, especially at Uber scale, being able to detect is much easier, because you have such a high volume that if something breaks, you must have some kind of monitoring that will detect it. >> One thing with hyperscaling is that things move so fast. You keep hiring people, but you're hiring a lot slower than your growth in terms of customers, features, all those things. So automation becomes really important. You've always been a huge fan of automation; I remember at Uber you built your own scripts to automate this or that.
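The three metrics Charles names are straightforward to compute once an incident timeline is recorded. A minimal sketch — the field names and the convention of measuring all three from when the system actually broke are my assumptions, not CloudKitchens' internal definitions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    started_at: datetime    # when the system actually broke
    detected_at: datetime   # when monitoring (or, worse, a customer) flagged it
    mitigated_at: datetime  # when customer impact stopped (e.g. rollback)
    resolved_at: datetime   # when the root cause was fully fixed

    @property
    def time_to_detection(self) -> timedelta:
        return self.detected_at - self.started_at

    @property
    def time_to_mitigation(self) -> timedelta:
        return self.mitigated_at - self.started_at

    @property
    def time_to_resolution(self) -> timedelta:
        return self.resolved_at - self.started_at

# Hypothetical incident: 40 minutes flying blind before detection,
# mitigated by rollback, root-cause fix shipped the next day.
inc = Incident(
    started_at=datetime(2024, 1, 5, 3, 0),
    detected_at=datetime(2024, 1, 5, 3, 40),
    mitigated_at=datetime(2024, 1, 5, 4, 10),
    resolved_at=datetime(2024, 1, 6, 12, 0),
)
```

The point of tracking all three separately is exactly the diagnosis in the conversation: a long time-to-detection means the monitoring is the thing to fix, regardless of how fast the mitigation was.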
What have you learned about automation, how it works, how it doesn't work? Maybe some things that are not so obvious about it. >> The most recent thing I learned about this is from an article I read about the ironies of automation, which is a pretty old article, but it's fascinating, and it talks about two things that I've definitely seen; I've made a lot of mistakes in this area. One: sometimes when we automate, we replace user error with automation-designer error. We are lacking context, we don't understand how the software or the process is used, so when we automate, we actually put in the wrong automation. Two: because you cannot automate everything, you usually automate the simplest things, and you leave the user in charge of the most complex stuff. That's the irony: automation sometimes fails. So what can you do to prevent that? The simplest thing is to observe and get a deep understanding of what the process is. You have to do it manually, right? We say startups should do things that don't scale first. So do the things manually first. This will give you the business context, product context, and operational context to then automate correctly. And the second thing is transparency. So many automations, particularly in the developer-experience world, automate so much that you don't see anything, and then when something breaks, you are left to debug. I'm sure it happened to you at Uber with some of the developer-experience products. >> Yeah. And speaking of doing things manually, at Uber we had this really interesting thing called sanity tests. Sanity tests were... our team owned payments.
So for everything in the Uber app, there were like 12 different ways to pay: credit cards, Apple Pay, Android Pay, and a lot of regional things; in India there was Paytm and a bunch of others. And every week we would need to make sure that the payments worked. So we would go there, we would order an Uber, and see that the payment went through, and we would actually do this manually, every time, even for flows that were sometimes mocked up. We needed to do it manually for a few reasons. One: it was just hard to automate, because we had to talk with the real payment providers, and a lot of them did not offer APIs for this. There was also fraud detection. But we kept doing it manually, and after a while it became really painful, and then we figured out how to automate it to some extent, and we actually started to make progress. But we started with just this manual process, and we did it for, I think, many weeks, and in the end we got so tired and frustrated that we were looking for creative ways to get out of it. >> Yeah. I don't know if you remember, but we had to ping an engineer in India to share with us the code that was sent via SMS. It was so inefficient. I mean, the end-to-end test topic is a fascinating one. I think there is something to be said also about automating them too much. Payment and login flows are typically designed against automation; I think that's one of the reasons we couldn't really automate them for a long time, because they are specifically designed to counter that.
And the other thing: when you automate those end-to-end tests too much, you get a lot of flakiness, because you're not only testing your flow. In the case of this SMS-based payment flow in India, you're also testing the quality of the SMS reception and any latencies there; you're testing flows from the banks that you don't control. So those tests are constantly failing, you get flaky tests, and flaky tests get ignored. >> One interesting challenge during the hyperscaling time was hiring, and again, most companies will not be in hyperscale mode, but every now and then you need to hire really fast. This might be because you get a big funding round, or for other reasons. What did you learn about what works when you need to hire efficiently, and you need to hire a lot of people, oftentimes experienced people? >> You look at the whole process. You look at every single step of the process as a funnel, as a pipeline, and you see where you have a leaky bucket. From a metrics standpoint, it's very important to have that. The other thing is: you build a partnership with a recruiter. So many people don't do that. You have to build this partnership so that you have a feedback loop where your recruiter knows what you're looking for, and makes sure that those candidates are sourced, are pre-screened from a recruiter standpoint, and then get through the process. And the last thing I would say: at every single step of the process, you have a continuous-improvement mindset. One thing, you remember, we did at Uber, and we do it at CloudKitchens as well, is pairing interviewers. You have two interviewers, and this way they can give each other feedback; they can train each other. There is only so much you can do with interviewer training alone, so by pairing interviewers, you can share feedback between the two.
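Treating the hiring process as a funnel, as Charles suggests, means instrumenting each stage and looking for the leaky step. A toy sketch, with made-up stage names and weekly counts purely for illustration:

```python
def funnel_conversion(stages: list[tuple[str, int]]) -> list[tuple[str, float]]:
    """Pass-through rate of each stage relative to the previous one."""
    rates = []
    for (_, prev_n), (name, n) in zip(stages, stages[1:]):
        rates.append((name, n / prev_n))
    return rates

# Hypothetical weekly pipeline numbers.
pipeline = [
    ("sourced", 200),
    ("recruiter_screen", 80),
    ("tech_screen", 40),
    ("onsite", 20),
    ("offer", 8),
    ("accepted", 6),
]

for stage, rate in funnel_conversion(pipeline):
    print(f"{stage}: {rate:.0%} of previous stage")
```

With these made-up numbers, the offer stage (8/20 = 40%) would be the leaky bucket to investigate first — exactly the kind of conversation the weekly engineering-manager/recruiter catch-up described next is for.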
>> Yeah, I remember the two things that we did, and I don't think many places do them even now. First, we had a weekly catch-up with the recruiter. Engineering managers and recruiters would do it, a bit like a team retrospective, and they would talk about things like their sourcing strategies: what cohorts are they targeting, or LinkedIn has this new feature, and they would share tips, and we were like, oh, pretty cool. Or talking about what kind of companies they're thinking of targeting, and we would say, oh, what about this one, I just heard there's some negative news coming from there, maybe people will be interested in joining from there. I've never seen that before, and it was really good. And when we started hiring, we would just sit down with the recruiter, spend an hour talking through: here's what I'm thinking, here's what this person would do, here's what I think a good hire would be. And the recruiters bring a lot of expertise, so they would ask a lot of clarifying questions. I get so annoyed as an engineer when a recruiter emails you saying, "I'm looking for 5 years of C# experience," because if you've done, you know, Java, Python, whatever, you can pick up C#. But we talked this through with the recruiter. They asked, "What is your tech stack?" We said, okay, Python, a little bit of Node.js; back then we were using Go. So they were like, okay, so we need people with expertise in these languages, and we said, no, no, no, as long as they did some language, that's okay; in fact, we don't care about the language that much. And they were like, "Ah, okay." And because of this, we helped the recruiter find better people, and not reject people who could have actually worked out. >> 100%. This relationship between the recruiter and the engineering manager is critical.
And actually, something that came from Thuan Pham, who you should also have on your podcast: he told me that every morning he would go and check in with the recruiters and say, hey, what are some of the interesting profiles you saw this week? It's a great way to have this feedback loop. The recruiters bring a lot of value and have their specific skill set, and you know who you're looking for; you know what your team is missing. If you don't have this tight feedback loop, it's going to take a lot of time to hire. >> Yeah. And then we had this interesting thing with the pairing, as you mentioned, with interviewers. We always had a primary interviewer and a secondary. The primary one would lead with the questions, the secondary would sometimes step in, and you could discuss in advance. And we had this concept of whether someone was signed off as an interviewer or not. When they were not yet signed off, they might lead the interview, but the secondary was a lot more experienced and would give them feedback afterwards. So after the interview, they would come and say, okay, here's some feedback: you could have let the candidate talk a bit more; it was great that you jumped in and gave tips, and so on. Again, this was something I'd not really seen before. Most engineers, I think, are just not really good at interviewing, and they rarely get feedback. We were trying to get this feedback loop going, set expectations, and we also did some interviewer training, but as you say, it's hard to do interviewer training. >> Yeah. We have the same process right now: you shadow the primary interviewer first, and then you reverse-shadow, meaning you drive the interview, and the other, more experienced interviewer gives you feedback. Yeah, you're absolutely right.
Just because you're a great engineer doesn't mean you're immediately a great interviewer. You can become a great interviewer with training and with on-the-job feedback. That's why this dynamic is critical. >> So, how do you know if you're in hypergrowth, and how might you know that you're getting out of it? Again, you've gone through this at Uber, and at CloudKitchens you might have had some phases of it. What are the signs that tell you, okay, this is really intense growth and I need to operate differently in this mode? >> Yeah, I would say it's a bit of what we talked about in the beginning: a lot of hiring, and then things breaking left and right without giving you the time to fix the fundamental root cause. And I would say the best sign that you're getting out of hyperscale is when you feel like, okay, now I have time to actually look at this holistically and build a system that I'm pretty happy with. Not necessarily a second system from scratch, which is rarely a good idea, but when you get into that territory. >> Speaking of thriving in hypergrowth, one of the biggest challenges is maintaining release velocity while avoiding outages as you scale. From my personal experience at Uber, this is hard to do. Most engineering teams think they have to pick one: ship fast and accept higher risk, or slow down releases to stay safe. But here's the thing: high-growth teams at companies like Notion, Brex, and Atlassian don't have to sacrifice speed for safety. They use the same toolkit that powers Meta's and Uber's experimentation systems, and now it's available to every engineering team. They all use Statsig. Statsig is our presenting partner for the season, and they've built a complete data platform that lets you ship features and measure exactly what each change does to your key metrics.
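An editor's aside: the flags-plus-gradual-rollout workflow discussed in this segment is typically built on deterministic bucketing, hashing a user ID into a stable bucket so that raising the rollout percentage only ever adds users. A minimal, hypothetical sketch (not Statsig's actual API; all names invented):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user into a gradual rollout.

    Hashing (feature, user_id) gives each user a stable bucket in
    [0, 100), so raising `percent` from 1 to 10 to 100 only ever
    adds users -- nobody flips back and forth between variants.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percent
```

Because the bucket is derived from a hash rather than stored state, the check needs no database, is identical across services, and makes "instant rollback" just setting `percent` back to 0.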
This is the same infrastructure that growth engineers use to run hundreds of A/B tests per year. Every feature release shows you exactly how it moves conversion, retention, revenue, whatever metrics you care about. Built-in analytics, feature flags, session replays, the full toolkit. No switching between different tools or trying to stitch together your own measurement system. The gradual rollout approach is crucial here: start with 1% of users, see how it affects your metrics, then progressively roll out to more users. If something goes wrong, instant rollback. If it's working, you can confidently scale it up. The validation here is incredible: OpenAI was such a heavy user of Statsig that they decided to acquire the company. Talk about product-market fit. Statsig has a generous free tier to get started, and pro pricing for teams starts at $150 per month. To learn more and get a 30-day enterprise trial, go to statsig.com/pragmatic. >> What can you do when you're feeling really overloaded? I remember, you were my manager, and there was a time when you looked pretty burnt out. We had performance season, a manager left the company, and suddenly their reports also reported to you. You suddenly had 30 reports, and you had to do 30 performance check-in conversations. I remember when you did the last one, a few of us more experienced engineers saw your calendar, it was full, and when you came out we were clapping that you were done, because it took like two weeks. And at the time, you had 30 reports, you were expected to still get stuff done, I think you were still doing some hands-on work. And you could tell: usually you're pretty chipper, but back then you were not chipper. What did you do to get out of that situation, or to push through it? >> Yeah, I would say there are two things.
There's one: personal productivity. It's a topic I've always been super passionate about, so I've always invested a lot in it. The first book I read on this topic was Getting Things Done by David Allen, which is, I think, also a section in your book, by the way. It's a critical skill: the earlier you invest in your personal productivity, the more dividends it's going to pay, and I really believe in compound interest. So you invest in this. And the second thing, it's a principle that works wonders everywhere: divide and conquer. So yeah, you're right, I remember that time. You have thirty reports, you cannot do everything, so you're going to have to find people who can take over some of those topics from you. I remember asking you to lead a project, for instance. It's a great way, because you give responsibilities to people, you help them stretch outside of their comfort zone, and you're saving time. So it's a win-win. >> I remember that you did this, and what I liked about it is you were very explicit about the ask. You were like, I need you to, for example, lead this project, and I don't have as much time to spend on it. I'm expecting to check in with you once a week; if you need something more before that, let me know. And then I think you also got more specific: I'd like you to make a plan for how you're going to do this, and let's check in on it. It was a way for you to spend less time and give more responsibility, and actually people liked it, because they were growing professionally, and they also understood why this added responsibility was going to them. And if you're smart, which I think most people were, you understood this is a good thing, because more responsibility means more opportunities to prove yourself.
I can now grow in a direction, either to be a lead, or, if they wanted, to later become a manager, or in the staff direction, and so on. >> Yeah, this divide and conquer is very similar to what we do with software, right? We create an abstraction, we give it responsibilities, we design an API that is clean, and then, inevitably, some implementation details might leak. With a project, you're giving the responsibility for the project to somebody, but some issues might require your involvement, and that's all fine. At the end of the day, that's still what you expect from the person who takes over: you're expecting them to drive, and to somewhat hide the implementation details from you. And yes, it provides growth opportunities, that's for sure. One of my managers told me that one of the most powerful expressions you can say to somebody is: I need your help. It's very powerful, very simple, right? I need your help with this. And the employee also feels empowered to make decisions. >> Yeah. And by the way, I think you can do this not just as a manager, but also when you're a more experienced person on the team and you have someone who's less experienced, and it does work magic. I think it does create a sense of responsibility. I think we also tend to forget that as soon as you hire someone, even if they're a brand-new grad, these are fully capable people. Sometimes I think back on how, in the industry, we have a tendency to over-baby the junior engineers: oh, this person has no experience. Well, if you think back on the people who built Facebook into the trillion-dollar company it is today, they were 19 and 20, dropped out of college.
So all I'm saying is, people, especially the people who come through a pretty high-standard hiring process, actually have a lot of capability, and I sometimes needed to remind myself of that when I was a manager or a tech lead. >> Yeah. Plus, the best way to learn is to make mistakes. So if you want to invest in your people, you have to let them make their own mistakes. >> You still maintain one of the most popular GitHub repositories, which keeps going viral whenever I mention it. It's called professional-programming, it has a few tens of thousands of stars, and it's this collection of really, really good reading resources. How did that start, and how are you updating it? I understand you started it for yourself initially; in fact, you were already doing it when we were at Uber, and you forwarded it to people. You were like, oh, check this out if you'd like to become a better engineer. >> Yeah, I think the way it started is I wanted to automate the process of giving people feedback and sending them to certain good articles. Right, back to automation. So I started compiling this list of topics, and for each one I put in what I considered the classic article on that topic. Really, articles that have changed people's minds, that are really influential, and that have driven a lot of engineering practices. And then I kept doing this. One thing, speaking of personal productivity, that you can also invest in as an engineer is a good process for keeping yourself up to date with what's going on in the industry. I try to read, in total, one hour per day, fiction as well, we can talk about that later, but I try to read a good bunch of engineering articles every day, and if I find a new classic article, I add it to the repository. And I've been doing that for the past 15 years, something like that.
>> How do you find what to read? What are your sources? >> Yeah. The main one is Hacker News. >> A good one, isn't it? >> The best. Yeah, it's still the best. I use it via an RSS feed with the top 10 articles of the day, plus I check the site itself, because sometimes there are diamonds that don't necessarily make it to that top-10 list. There are some really good newsletters: the Bytes one for front-end engineers is hilarious on top of everything; every time I have a chuckle, because it's just so well written. I also follow newsletters about the programming languages that I use, so Python and Java. And a lot of the software tech-lead newsletters are really good as well. >> You were an engineer yourself, and you've managed a lot of pretty good engineers. Can we talk about a story of a standout software engineer, and what made this person stand out? >> Yeah, there are so many good stories. I'm not sure I want to single out somebody in particular, but there are traits of the archetypal good software engineer. I would say there are a couple of things. There's one: shipping. I've written three or four career ladders, and a lot of career ladders in the industry, after senior, overfocus on meta-work: reviewing RFCs, attending meetings, influencing that, strategizing this. It's really difficult to understand what's going on. So the first quality, and we tried to put that in the CloudKitchens career ladder, is the focus on building, on shipping value, being creative, being an expert in your programming language and in system architecture. So that's one: shipping is really key. >> So even above senior, at the staff engineer level, you still place focus on, you still need to ship things, you need to get into production. >> Absolutely.
And we expect, for instance, a staff engineer to really find creative ways to speed up execution or achieve a 10x improvement in quality. That's what we expect, not just reviewing RFCs or attending meetings. So that's one: shipping. And then lifting, lifting the people around you. This is critical. We are knowledge workers, essentially, so you have to train the people around you, you have to give a hand, you have to help, you have to have a good attitude. I think the best engineers I've worked with have this amazing attitude of ownership, of taking a problem and not stopping at team boundaries. A couple of stories here, but for instance: a software engineer identifies a problem, and they're a mobile engineer, and they go into the API gateway, see the problem is not there, and keep going. They go to the backend, and they actually make the fix in the backend, even though they're a mobile engineer. And by the way, the amazing thing is, in this age of AI, you can really bootstrap that with AI, right? >> I feel there's starting to be no real excuse, because before AI, you could say it's hard to onboard, it's hard to get to know a stack, it takes time and effort. And it still takes time and effort to become an expert, but as you say, to unblock yourself, at a time when a product manager can open a pull request that might be accepted, I feel engineers are necessarily becoming full-stack, with an expertise in their home stack. >> Martin Fowler posted a brilliant article, I think, about the expert generalist.
It's funny, because he mentioned that this T-shaped person we often talked about, this person who's an expert in one thing and one thing only >> and then broad >> comes from the Valve software handbook, really. But I've never seen that in real life, and it made me realize that what you're actually looking for is more like a rake: somebody who can go deep in a lot of different areas. And with AI, you're absolutely right. For instance, one of the best use cases for AI I've seen so far, for me, is: there's a problem with this feature, find me where the code is. And then I read the code and I understand, or maybe I need to understand how much time it's going to take to fix this or that, or to implement this or that. Find me where the code is. And we use a monorepo at CloudKitchens, and that helps, right? Everything is in one spot, so you can ask those kinds of questions and get that kind of answer. >> And so what other characteristics? So shipping, ownership. >> Yes, lift. So ship and lift. Yes. >> And I will add, there are so many things, right? But I will add one thing: humor. Taking yourself lightly. It's really important. Evidently not limited to solid engineers, but I think humility and being able to be self-deprecating, it's pretty important. >> I like how you put it, because I feel if you have a little self-deprecating humor, it's hard to have a big ego. And I feel that's the thing that gets in the way of some engineers who are really standout and really smart, but become insufferable if they keep putting their ego ahead of them. And it goes against everything: they're not going to lift others if they don't have that learning mindset. So yeah. >> Yeah. And also, sometimes you have those debates, right?
You have those debates on RFCs, on code, etc., and you see some tension and some conflicts. Humor diffuses these things, so it's a really powerful move. Yeah, and there are so many other things, like structure and method. I think a good, solid engineer has a method in place for fixing problems. It starts with observability and metrics. A good story here would be: we're in an incident, you're debugging, everybody thinks the problem is in system A, and then you have a really creative engineer who says, actually, it's here, we're all looking in the wrong direction, and here's a super quick way to mitigate this problem. That is also solid engineering. It goes back to the shipping thing: you can only do that if you stay close to the code. >> Now, in this repository you have a lot of really good articles, and I love how it starts: I would like this repository to not be overloaded, so I will limit it to the most important, impactful documents. Again, about 10 years ago, when you sent it to me, it was a lot smaller, but now it's becoming a little overbearing, there's so much reading there. Again, I really appreciate that every one of them is worth reading. What is your take on the importance of reading versus doing? Because if you read all those articles, I'm sure it'll help you to some extent, but let's be real: if reading could make you a great engineer, then anyone could just read the whole thing. How do you think about balancing reading and doing? >> Yeah, absolutely. So evidently this repository is not designed to be read cover to cover.
You are confronted with a topic, and what happens is, usually during the day you're in doing mode, right? You need to get to results. I think people will start looking at you sideways if you take out a book and start reading it on the job; people usually don't see that super well. So what you do during the day is try to make it work, and you're going to try many, many things; you might use AI to unblock yourself. And then what you can do outside of work time is read a book about the fundamentals of the topic you're confronted with, because when you're trying to make things work, sometimes what you're missing is the fundamentals of that topic, and if you add them, it will unblock you and considerably speed up your process. That's why reading and doing go hand in hand; you have to do both. >> Yeah, so you're saying a good way is to read related to the problem area your head is in anyway, right? >> Yes, absolutely. And yeah, reading outside of work. I know people might be concerned about saying something like this, but I would say, hey, we usually spend a lot of time on our screens anyway, so rather than mindlessly spending 30 minutes on a social network, why not read for 30 minutes? And by the way, you don't always have to read technical books. There are some really good non-technical books that are also useful. One of the best I've read is Complications by the surgeon Atul Gawande. It's an amazing book about how you learn from mistakes and how to have a scientific process when it comes to learning from those errors. And there are so many other books like that. Another good example is all of the aviation incident postmortems. You learn a lot about the human component of an outage, and it's very applicable to software incidents. >> Yeah.
Especially because in software incident reviews we try to keep them blameless, so we keep the human part absolutely out of it, which I guess is good in some ways, right? There's no finger-pointing. But it can miss the very human nature of it: we're people, we make mistakes, it happens, and it's kind of to be expected as well. >> Yeah, blameless is very important and the intent is right, but sometimes it's not well executed, and when you completely remove the human component, I think you're missing 80% of the incident's potential mitigations. >> Yeah. Like, there was this incident, I remember, a small outage in the middle of the night, at 2:00 a.m. The engineer woke up and turned it into a big outage, because, you know, they were just tired. It was freaking 2:00 a.m. They thought they were doing the right thing, but because it was a blameless postmortem, we kept going on about how the system could have prevented this. And I'm like, I'm sorry, but what could have prevented this is not having to make a decision at all, or that engineer not deciding to try to mitigate it on the spot. And I mean, there are some things, but there was that context of: look, when you're waking up in the middle of the night, either it could have been okay to ask for help, or it could have been fine just to leave it and come back to it in the morning. But again, this was one downside I saw, and it doesn't always happen like this, but in this case that very important thing was missing: as a tired person, you're going to make mistakes. >> Yeah.
And being too blameless, or at least not involving the human component, removes the opportunity to find training needs. And as an industry, compared to all the other industries, we do so little training. For instance, I was talking about feature flags: if somebody has never been trained on why it's a good idea to use feature flags, how to use them, what internal tools you can use, what some case studies are, sure, they will not use them, and it's a huge missed opportunity. So if you don't cover those aspects in a postmortem, you're missing those training opportunities. And yeah, we also need to be completely okay saying, yes, mistakes happen, it's a way of life. As my mom used to say, only those who don't do anything don't break anything. >> Yeah. Speaking about training, I had this really interesting experience with you. Let's see if you remember. Back when I started at Uber, we were in hypergrowth mode, we had a lot of projects going on, and engineers needed to lead projects. We just didn't have the product managers; they didn't have the capacity to lead them. And so you decided, okay, well, since everyone's leading projects, we might as well get people some training on what good project management looks like, because we're all software engineers, we're not the experts on this. You looked and you found an expert, you brought that person in for a two-day training, and at the lunch break, all hell broke loose. Engineers were like, what the hell is this? This guy is talking about some old-school thing that doesn't work. The uproar was so big we canceled the training. And I still think about what went wrong: could we even have done training, were we doing things that were just not trainable, or would we have needed to do our own internal training? Do you remember this? >> I absolutely remember this, it's funny.
It was a very humbling experience, because I did choose the training company. The main learning for me is: you have to build your own training. Only you know what your team needs, and the mistake I made was to have somebody else come in and do a training. >> Even though you spent a bunch of time selecting who to bring in. >> Yeah, yeah. A big mistake, for sure. Since then, very simple: I just build my own trainings. I know exactly what the team needs, and the other thing is I keep on improving them. The truth is, project management is not rocket science, right? We could go into details, but it's essentially managing the trade-off between time, cost, and quality, and once you've said that, you need to apply it to your team, give case studies, have people engage. And only you can build those trainings. >> And that's what we actually did afterwards. I don't think it was intentional, but I started doing this. I started putting a document together for my team saying, okay, here's how we usually do it, here are examples, here are pointers to internal sites, here's how we try to keep it as lightweight as possible. Because I think that's the thing with a lot of these things, and I wonder if this is why that external training didn't land: as an engineer, you want to make the process as simple as possible. You need to have some, you know, checkpoints.
Sometimes they're optional, but in the case of project management, for example, I think the only things we had were: do some planning up front, and decide how you want to do it, it's up to you, do you want to do a big meeting, do you want to do whiteboarding, it doesn't matter; put together a plan, make it a document, send it out for comments. That was a fixed thing. Then I think the only other thing we mandated was a kickoff meeting, because we realized that when we don't have a kickoff meeting, the projects don't go that well. And the third thing we said was: every week, send a weekly summary to everyone on your team and a few stakeholders. There were a bunch of recommendations in between, you could do this, you could do that, but these were the three things. And we did it because, over time, we figured out this works for us. It might not work for another team; another team might have added additional things. I know some teams mandate, let's say, retrospectives and all of those things. And in the end, every team came up with their own process. Some teams forked it, you know, copied this document, added some things, removed some things. >> Yeah, you have a ton of good content on your blog about project management. Maybe one thing most people don't realize is this weekly update thing, which is something I learned at my first job, an internship in South Africa; my manager was sending this weekly update to almost the whole company. It does so many things, right? One, it forces you to be explicit about your goals, your successes, your lowlights, what you can improve. So that's great. And two, it's a good perception tool. It's incredible how just sending a weekly update drives a perception of movement. And it's critical, right? You have to manage the perception of your work. It works for a team.
It also works for an individual contributor. You've been tasked with this or that project? Very simple move: send a weekly Slack message. This is what I'm planning to do this week, this is what I did last week, that's it, pretty much like a standup, and maybe some key metrics, in writing. It works wonders. >> Yeah. And I really liked how you suggested we do it: every week we posted the status, you know, is it on track, when are we planning to have the next thing shipped, or whatever it was, then highlights, some good things, and then we also had lowlights. We just tried to keep ourselves honest. And what I actually found is, by adding lowlights and by being honest, and of course, when something was at risk, adding the mitigation, what are we doing about it, it just forced conversations early, and we didn't really have surprises. When a project was late, we knew well ahead, and when we didn't, we'd talk with the team or the engineers, saying, did you actually notice? And usually it turned out they kind of knew; they just didn't want to bother with it. But by having that trust, again, it wasn't about, you know, getting in trouble for being late. It's like, no, we'd just like to know, so we can help, we can talk about it, we can talk about the trade-offs, because it's all trade-offs, right? With project management, when a project is looking like it will slip, you can either cut the scope, pull in more people, or extend the timeline. I think those are the three things you can really do. >> Yeah, absolutely. And the lowlights are an important section.
I don't always include it, frankly, but one thing I always make sure to include is ETAs. A deadline is the ultimate source of inspiration, right? Because if you look at the cost, which is essentially the engineering time, it's always a dynamic thing; how many engineers are working on the project is not always super easy to measure. The quality, which is the scope and what the project is aiming to ship, is also very dynamic; there are micro-optimizations at the feature level that people might not see. But the ETA is super objective, right? We said we would ship on that day, we're going to be two days late, and then comes the discussion: is it okay? Should we cut scope, or increase the number of engineers, which does not always work, right, the mythical man-month. But yeah, those dates, and making sure they are in your weekly update: the ultimate source of inspiration when it comes to project management trade-offs. >> And I feel that getting better at project management, managing your own project that you're working on, whether it's a one-person thing or a few people, is a superpower as a software engineer. Because I think as software engineers we believe our job is to write code, but actually our job is to solve problems and to ship solutions to problems that the business has, be those new features or products. And you become more experienced, and frankly, higher paid, when you can take on more complex things, and there's only so far you can go without project management.
I remember that at Uber there was some pushback. Some people said, I don't really want to do project management, and I was like, well, okay, so I guess two things can happen. One: someone else can do it, and you're just not going to do it. Or two: we could bring in a dedicated project manager, we could literally hire a person, and they will be the project manager, non-technical, and they're going to keep pestering you for updates; you're probably going to hate them. And a lot of places still work like this, and some of it has to do with some people just being really resistant: I don't want to do it. Great, well, you might actually be worse off. >> Yeah. I've never worked at a place with those non-technical project managers. Usually you have this TPM role, and it's most effective, I think, when you have a project that impacts 20 or 30 teams. Very useful. But when it comes to a single team's project, once again, it's not rocket science. It's the weekly updates, making sure the project is on track, making sure it's still going to deliver the value people are expecting. And I would even go as far as saying that project management is one thing, but I would also recommend getting into product management. Because the irony of the software engineer's work is, there's a lot that can be blamed on the software engineer, right? A feature is buggy, hasn't been tested enough, there's an incident, a wrong architecture decision was made. And it's very objective how you measure those. But there is one thing that is relatively difficult to measure, and that most companies, I think, don't do a great job at: making sure a product is doing the right thing, that it was the right strategy in the first place, and that it's shipping the right value.
So as a software engineer, if you don't do that, you risk wasting not days but weeks, maybe months, working on a feature that is actually not going to move the needle for the customer. >> Yeah, or even years. We see products that even larger companies bring out which are just big flops, and they put so much time and effort into them. We could even see it at Uber: projects that were killed. I still remember to this day, we had a team doing Jump bikes, and they were integrating them into the Uber app, changing some parts of our stack. They came in and they didn't add tests; they added a feature or something and didn't add tests, and we kind of got into a standoff with them, because our quality bar was that there need to be tests. In fact, I think they even deleted tests, which is worse. At first I was like, how can they call themselves software engineers? We're at the same company, and they miss these quality things. For us it was really important, because we knew we wanted stability; we have real-world impact and so on. And after a while, when I talked with them, I started to understand that they were coming from a different angle: they were not sure their team would exist in two months, and actually it didn't. In like six months it was sold off, or moved to a different entity. They were just at a different stage; they were in survival mode. They were like, "Let's just get it to work. We'll test it if it's still around." And this is what made me realize that there are just different levels of software engineering. When you're working on a mature product that's making a bunch of money, which was us, you have different goals than a startup at the MVP stage. They weren't sure it would even work. >> Yeah.
The time-cost-quality management is very different depending on where your product is. So if you don't understand that, you're not going to make the right decisions, in particular when it comes to technical debt, right? Taking on technical debt is a perfectly valid thing to do when you don't know if your team is going to exist next month; it might be much trickier if you're building the architecture for the next few years. And I would say it's especially the case for internal projects, because internal projects, an internal platform, usually don't have a product manager, so in a way you are also the product manager. >> How would you suggest software engineers learn to strike the balance on tech debt? How much to focus on it or not, or what is your mental model for deciding how important tech debt is? Because I've seen engineers who care about the craft go, frankly, overboard on this, trying to fix it, make it perfect, when honestly it doesn't matter all that much. But there's the other side as well, right? The hacker mentality that just gets things done, and yeah, it's a mess, it's spaghetti code, it's hard to maintain after. >> Yeah, I would say two things. One: it's tech debt if you know; if you don't know, it's just recklessness, right? It's moving fast and not knowing what you're doing. I think we call it vibe coding these days, right? That's not tech debt; it's just recklessness. So you have to care a lot about your craft to know where you're taking a shortcut. And two, I would say contain the technical debt: hide it behind a clean interface, use good patterns so that you can revisit that technical debt later with minimal changes, and keep track of it, evidently. >> One thing you've done a lot is hiring.
You've hired so many people over time. Right now the job market is bouncing back a little bit, but it's still pretty tough out there for software engineers and engineering managers to get hired. What is your advice for software engineers, let's say experienced software engineers, to get hired? How would you suggest they prepare? >> Yeah. So first, you're absolutely right that the press and the media are full of articles about a difficult job market for engineers. You shared an article yesterday about it. It's not what I'm seeing; we are hiring, for instance. So I would say don't listen to that news too much. I mean, there is so much advice on the internet about preparation and things like that, so it's difficult to say anything original about it, but I would say definitely have a structure and a method. Most of the interviews you will have are technical interviews, and if you have a good method, a good structure, the interviewer, even if you don't necessarily succeed at solving the problem at hand, will see that and will reward you for it, I think. So for instance, for algorithms, start with a brute-force solution and then keep improving and optimizing. For a system architecture interview, start with the product, then go into the requirements, functional and nonfunctional, then start with an architecture, etc., focusing on the trade-offs and things like that. So having a structured method is definitely super useful. Then there is the topic of AI. Do we want to talk about AI? I think it's top of mind. >> Yeah. But before we get to the interview: when you're hiring on your team, I guess it starts with either applications coming in, or you also have so-called outbound sourcers, who you talk with and tell, here's the type of profile I'm looking for.
What could help you be discovered? I guess this will be your LinkedIn profile, and also your résumé. Is there anything, and again, there's been so much advice, I even have a book about résumés, but are there some résumés that you think are better than others? >> The main thing people will look for is the experiences, right? Your experiences. Which obviously is not valid advice for people who are just out of college, for instance, but I would still recommend that they promote the projects they've been doing. >> Well, and I also think this is important, because when you're choosing your next job, if you have options, you could think about what will make your résumé stronger, all things being equal or similarly equal. I had this early in my career when I had the opportunity to join Skype. It was the first job where I didn't take any pay rise; it was the exact same package. But I would have taken a pay cut to do so, because it was the first company that I actually recognized. And later, once I got there, suddenly messages started to come in, because I was now at a company that, at the time, was well known. Nowadays it's discontinued, but still. >> Yeah, you picked the right first experiences. You're absolutely right: you came up on our radar because of the experiences that you had. So I would say go toward companies that are known for the quality of their engineering, because that's going to be the main feature as far as your résumé and LinkedIn go. I mean, don't overthink it. Most people don't spend too much time on it. And don't use AI. I would say don't use AI to generate your résumé. I've spent the last few days reviewing 30 résumés; 25 out of those 30 were AI-generated. Some even had "attended standups" in the skills section, and there were a lot like that.
So now I can tell an AI-generated résumé: it has a lot of those relative-number improvements. There's going to be "reduced deploy time by 30%" and no absolute number. >> Reduced by 30% from what? I mean, if you reduced it from two days, then sure, but on its own 30% is probably not very impressive. Right. >> So don't use AI. Keep it simple, keep it short, and keep it creative as well, right? Sourcers and hiring managers and recruiters review hundreds of résumés per day, so if you want to stand out: be original, be creative, don't use AI. >> Yeah. And I think, as you're looking, it could be worth looking at other people's LinkedIn profiles and making notes of things you think are cool and original, and using some of that as inspiration. >> And this goes back to choosing the right company to work for, which is: what are the technical problems that are really going to move your career forward? That's why it's also a good idea to prepare questions for your interviewer, and one of those questions should be: what are the technical challenges I'm going to be exposed to? Because that's going to give you a good direction for your career, and you're going to be able to check whether this company makes sense or not. >> And then once you're in the interview... I mean, the technical interviews, I think, are pretty straightforward; you just need to be good at them. But when you're talking with the hiring manager, someone like yourself, who are the people who do pretty well in those interviews? I'm assuming: don't ramble, for example. >> Yeah, I would say don't ramble, and give short answers, right? The dynamic of the interview is that somebody is asking questions and driving the conversation. It is what it is.
So if you use up all the time, the interviewer is not going to be able to ask their questions. For the hiring manager conversation, it depends whether it's a sell call or a behavioral interview. >> And by sell call we mean calls where you're not sure this person is interested, but you would like to convince them to apply. Typically for more high-profile people, often industry-known, maybe people at a company known to have really good benefits. For example, someone working at Meta right now is kind of hard to sell, because the stock is just going upward, so their compensation might have gone up, right? So that's a sell call. >> Exactly. And then you can prepare by asking smart questions, right? You research the company and you have a good list of questions to ask. I think the best candidates prepare the interview, even those conversations with the hiring manager. And what this says to the interviewer about the candidate is: if this candidate prepared the interview really well, they're probably also going to prepare their work really well once they are at the company. >> It's kind of surprising that we even have to talk about these things. But I talked with a recruiter at a publicly traded tech company. There are a few thousand people there; they're a well-known company. I just don't want to say the name, because I don't want to put them on the spot. But this recruiter said they were recruiting for director- and VP-level candidates,
and the people coming in did not know the company's products. This company makes a bunch of different products, and she said that like 90% of people just didn't do the basic research. And we were not even talking about software engineers. So I'm going to plus-one that: when you get to an interview, first of all, it will be a red flag if you've been ignorant, and, interestingly enough, you will stand out by putting in a little more than the basic preparation. Which, by the way, can simply be interesting: learning about a different business, what they do, thinking about what their business model could be. This goes back to things you can actually use later when you're in a lead position; maybe one day you'll become a founder. All useful information. >> Yeah. And you have to manage your career, right? If you don't know anything about the company you're interviewing for, you cannot know whether this company will actually empower your career or not. So this preparation is also in your interest. >> Well, yeah. And preparation is also in your interest to see how the company is doing, for example financially and business-wise. It would be nice to know if there's a likelihood of the company doing cuts in the next 6 to 12 months, which might be hard to tell from the outside, but not if you do zero preparation. On the podcast I had John Clara, who's a software engineer at OpenAI. She interviewed with 46 different companies, and she researched all of them: she actually researched their products, she tried to use them, she looked for Reddit reviews, all of this to finally decide where she would go. Because she wanted to go to a company with a good trajectory, where her stock could be worth something, etc. And I realized, well, yeah, I think more people should do that.
>> Yeah, if you're that careful about your career choices, I think that also says something about your qualities as a software engineer. We were talking about a previous colleague of ours, right, who has the same structured mindset when it comes to picking their next startup. And that applies to everything: you have the same structure and method for picking your next career opportunity, and for picking your next technology to use in this or that architecture. >> So with AI, how is AI, from your perspective, creeping into the hiring process? It's everywhere, right? You already told me that you can spot the résumés that have been written or prompted by AI, but what else are you seeing? >> Yeah, we see it a lot in interviews as well. For instance, we explicitly ask candidates not to use AI. It's actually quite interesting; I think the industry will probably go back to on-site interviews a lot more than we used to. >> It seems unavoidable. >> And by the way, this is not to say that AI is not a good tool on the job. It's just a really bad tool when you're interviewing somebody and trying to get signal, because you don't know if you're only testing their prompting abilities, which is a very important skill, but not the only skill. So I would say use AI as a coach, as a trainer, but certainly not as a cheating partner. And as a trainer it's a really powerful move, right? For instance, you can prepare those questions about the company and ask whether those questions make sense, and you can also prepare an architecture interview. There is so much you can do to prepare and train yourself, but I would not use it during an interview. >> So, you've worked at a couple of fast-paced startups: Uber, later other startups, and you're now at Cloud Kitchens.
What have you seen people do to be successful at these startups? When I hired people at Uber, a lot of them hit the ground running, but some of them struggled, because it was just a big pace change. And these are all fast-paced startups. You know, hypergrowth is probably over, but fast-paced startups are here to stay. In fact, what I'm seeing with the startups of today is that they're even faster: they ship faster, they get to product-market fit faster. What have you seen engineers do who do well in this environment? >> I would say, first, personal productivity helps a lot. So invest in this early in your career, so that you can hit the ground running rapidly and stay structured and methodical when you onboard at a new company. >> And by investing in personal productivity you mean figuring out what works for you, what kind of methods, may that be to-do lists or Pomodoro or focus time or whatever? >> Exactly, yeah. How do you keep track of your commitments? How do you keep track of your reading? >> What worked for you, or what phases have you gone through in this sense? >> I've mostly used a method called Getting Things Done, which is not rocket science, fairly simple. I've used the same tool, which is called Things, for the past 15 years, plus a plain to-do file, and I also use a GitHub repository for my personal notes, called PKM, personal knowledge management; that helps a lot as well. And I try to memorize stuff with flashcards, Anki. Those are things I've used consistently, and it pays dividends to this day. >> For flashcards, what kind of things do you memorize?
So for instance, I memorized the whole Python standard library. Well, not everything, but a lot of it, and it helped a lot in the beginning. And then you memorize architecture patterns, data science methodologies... >> Love it. >> The standard library of your current language, issues you've run into, bash commands that you use consistently. So I would say personal productivity does help, as does reading fast and being able to synthesize and summarize a document quickly. >> And I guess the best way to get there is to practice, right? To read. >> Read, exactly. And definitely prioritize meeting with people and understanding their context, because there is only so much you can find in the internal documents and in the code, right? Understanding the history of the technical decisions helps a lot when it comes to maintaining that platform later on. >> So, I'm not sure if you recommended this or not, but at Uber, one thing I started to do after a while, when I became an engineering manager, was to meet other engineering managers on the team, or, when I went to, for example, San Francisco (we were based in Amsterdam), spend that time to meet with them in person. I'd start with a short presentation of our team, just a few slides, to say here's what we do. It was meant to be really short; it was just a conversation starter. Then I'd ask: "Hey, what do you do? How did you get into Uber?" And then, as we warmed up: "What are your plans after Uber? Where do you actually want to go?" Those things became really useful. And I realized that as an engineer I probably should have done more of this. I saw some of the best engineers just meet in person with another tech lead or another engineer on the team, or grab lunch with them, and it makes all the difference. >> Yeah, I think AB talked a lot about it. I think it makes sense.
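To give a flavor of the kind of standard-library detail worth drilling with flashcards, here is a small sketch. These are illustrative picks of my own, not Charles's actual deck:

```python
# A few Python standard-library idioms of the kind a flashcard deck
# might drill (illustrative examples, not from the interview).
import bisect
from collections import Counter
from itertools import groupby

# Counter: tally elements and fetch the most common ones.
votes = ["py", "go", "py", "rs", "py", "go"]
assert Counter(votes).most_common(1) == [("py", 3)]

# bisect: binary search over an already-sorted list.
sorted_etas = [5, 10, 30, 60]
assert bisect.bisect_left(sorted_etas, 30) == 2

# groupby: group consecutive items by key (input must be sorted by that key).
pairs = [("a", 1), ("a", 2), ("b", 3)]
grouped = {k: [v for _, v in g] for k, g in groupby(pairs, key=lambda p: p[0])}
assert grouped == {"a": [1, 2], "b": [3]}
```

Each snippet maps naturally onto a question/answer card, which is roughly how Anki decks for a standard library tend to be structured.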
>> She might have been the instigator. >> It makes a lot of sense, right? You're going to work with people; you're not going to only interact with code. So getting to know them on a personal level is critical if you want to work well with them. >> And they'll just be nicer to you. >> And it's nicer, yeah. >> And you'll be nicer to them. >> Yeah. So that's why, coming back to this humor and self-deprecation: you want to work with colleagues who are really good at what they do, that's for sure, but you also want to have fun working with them, right? It shouldn't be an annoying or frustrating thing. The best way to do that is to get to know them, because I'm convinced that everybody has an interesting story and stuff you can learn from them. >> Yeah, and I guess one thing is that in a fast-growing company, conflicts are a bit more common, just because people try to get things done. There's a lot of email and Slack messages, and sometimes they can come across as aggressive or passive-aggressive. I still remember to this day, this was again back at Uber, we were emailing a lot back and forth with the San Francisco platform team, and there was this guy who was, at times, just really harsh in his responses. And then once he was in Amsterdam, and he was such a nice guy. Once we got to know him, someone asked, "Why did you write this?" "Oh, I just write really fast; it was really late at night." Turns out he was writing at 3:00 a.m. and he wasn't being too careful. And every now and then I come back to this point: well, I guess software is often more about people, or at some point people start to become really important, especially inside a company.
>> Yeah, that's why my number one advice, when a code review is not going well, or an RFC review is getting into tons of comments, is: just meet, right? Meet face to face. You're going to hash it out in less time than it takes to have all those exchanges and frustration, with everybody second-guessing the thing. And, let's be honest, a lot of people in startups have English as a second language, so we don't necessarily understand all the nuances of the language, contrary to native speakers, right? >> And also some people cannot express their own nuances. >> Or they have a hard time, especially in writing. >> Yeah, absolutely. >> What are some non-obvious recommendations to thrive inside a high-growth startup? >> Yeah, non-obvious. So, I talked about memorization: using flashcards to memorize stuff, very powerful. And second, I'm not sure if it's super non-obvious, but extreme ownership. That means you feel like the team is yours, the project is fully yours, and really going out of your way to understand all the dependencies and the overall context it fits into is really critical to being super successful. I talked about how the best engineers don't stop at team boundaries; they keep going. Our industry, our work, is mostly about building abstractions on top of abstractions on top of other abstractions. In reality, there are always leaky abstractions, implementation details, right? So the more you understand about the stack you rest upon, the more effective you're going to be as an engineer. It also applies to your tools, to your programming language, to everything. >> Yeah. And one thing you used to say a lot is underpromise and overdeliver, which I guess is easier said than done. >> Yeah.
This is where the weekly update also helps: you set your timeline, and if you find a creative way to reach that timeline faster, it's a pretty powerful move. >> When it comes to working at high scale, at fast-paced startups, a lot of people have imposter syndrome. Did you have this? Did you see people have it, and how did you work around it? How did the people you worked with conquer it? >> Yeah, you always have this, I think, to this day. It's actually a good thing to have. We often talk about imposter syndrome as a bad thing, but I think it's actually the engine that drives your curiosity and moves you toward improving yourself, toward having this continuous-improvement mindset. What's really interesting in the world of startup mindset and culture is that, and I've been really interested in ancient philosophy and Stoicism, when you read those ancient philosophers' writings, they're still very applicable to us today: don't over-focus on your emotions, just get at it, right? Just work on those aspects so that you become a better engineer. I want to give just one quote, from Dan Heller, whom we both know, which is a really good way to phrase this: "I think imposter syndrome is underrated. A lot of talk goes into overcoming imposter syndrome. I say embrace self-skepticism and doubt yourself every day." What else is there to add, right? It's a good drive; it's a good engine.
Even Thuan, the CTO of Uber, acknowledged it, and I think it takes a lot of humility to say that you feel like you have imposter syndrome. But it's a good thing. >> And I guess you typically have imposter syndrome when you feel the people around you are smarter, when you feel you might not be fully up to speed with wherever you are, which is typical at a high-growth startup; it's going to be a rocket ship, right? So I guess it's natural to feel that. In fact, if you didn't feel it, you might ask: am I at the right place? Is this really, truly a high-growth startup, one whose ambitions go beyond your own? >> Yeah, you want to be impressed by your interviewers, for instance. That's a good way to know whether you're going to be at the right company next. >> What are some non-obvious reading recommendations you might have for engineers who want to get better? >> Yeah. So, I would say read the fundamentals, not necessarily only the most recent technical books. For instance, one non-obvious book I would recommend is The Linux Programming Interface, which goes into the API that is exposed by the kernel. It's fascinating, because not only does it explain the kernel, and it's super useful because most of our stack... >> is built on top of Linux. >> Exactly. So that's one. And two, it also goes into the historical technical decisions.
There are some really interesting algorithms as well, like why the kernel chose a red-black tree instead of a hashmap for certain data structures. It's really interesting, really fascinating. >> So, fundamentals. >> Fundamentals about your programming language, adjacent fields (so, for instance, the compilers book I mentioned, a fascinating read), and I would say fiction as well. Read a lot of fiction, first because it's not all about work, and also because it trains you to be a better writer. Reading good fiction is a great way to improve your English, to improve your writing skills as well. >> And I guess your reading skills too. You mentioned how helpful it is to be able to read fast and digest information fast, and I guess it's like going to the gym and working out; you're now working your mind. >> Yeah. I've also been reading a lot more philosophy lately. I mean, evidently software engineering is a very scientific endeavor, but there's a lot to be said about the fact that we handle concepts; it's very conceptual work, very similar to philosophy. So, for instance, I listened to your interview with John Ousterhout about the philosophy of software design, right? And he mentioned that at the core of software design you have this decomposition of problems. Well, that's at the core of human thought; it's the core of reason, right? Reason is about distinguishing between different concepts, and it's no irony that there is "philosophy" in the title of that episode, because philosophy is exactly that: distinguishing between different parts. That's really what classical philosophy is about. >> Yeah.
In fact, when it comes to, for example, designing a new system, you kind of need to go down a path. You can argue whether it's philosophy or not, whether you do microservices or monoliths, but the argument can turn philosophical in the sense that there is no one right or wrong, and at some point you need to go with one of them. It's fascinating how there are some parallels. >> Yeah, when you're writing an RFC, you're making a case, so there is a big component of logic: how sound are your arguments? For instance, at Cloud Kitchens, in the competencies, we have a big section about truth, and we have one competency which I like very much called QED, right? >> QED. >> Quod erat demonstrandum: what needed to be demonstrated. >> And it focuses on how well you make your case. Do you eliminate bad arguments? Because when you read an RFC and there's one good argument in the middle of 15 bad ones, you feel like, yeah, I'm wasting my time. As a reader, it helps you uncover that, and as a writer, it helps you realize it, so that you focus on the good argument you have and make it really strong. And it really helps your career, I think. >> It almost feels like being a good debater, good at laying out your arguments in a logical way, helps you be a good software engineer, because we need this, right? It's a logical field where, obviously, there's a human part, but you need to build up your arguments in a way that's understandable and easy to read if it's in a document, which a lot of it is.
Yeah, there are two virtues that are really important, and I think under-focused on in the startup world: courage, the strength to go against difficulties and maybe opposition, and humility, because when you're focused on logic and truth, you can also say, hey, I was wrong. You make a case, somebody comes back with a really strong rejoinder, and you say, well, yeah, you know what, you're right, this is not the right architecture. And it's a great way to grow, right? >> Yeah. And the best engineers always shared that: no matter how senior or experienced they were, or how fancy a title they had, they had an open mind and they would change their mind and go, "Okay, yeah, let's build this idea," or, "Let's go with your idea." No vanity. >> A lot of the advice that comes out of people who worked at Uber in its hypergrowth time, or today it would be OpenAI, we could argue a little bit: is that survivorship bias or not? >> It is, I think. It's definitely survivorship bias. You listen to us, for instance, and we're going to talk about all the things we did to fix some of the chaos, but in a way you could argue that the chaos is probably why the startup was successful in the first place, right? The speed-up you get when you're focused on shipping definitely has trade-offs in terms of quality and maintainability and reliability, but I would submit that in most cases that's not what you're looking for first. You're looking first to build a product, to build a successful business, and the rest comes after. And I think the industry has a tendency to over-focus on what comes after: the quality, the investment in the internal platforms. Actually, it's fine in the beginning if it's not standardized, if it's not that structured, if what you get in exchange is speed. >> I think Uber had this thing, which we got rid of just as I arrived. It was called the ping.
Every 5 seconds or so, the app would send a ping to the server; it would say, "Send me the data," and the server would return a blob of information: the waiting times, the ETAs for the different categories, and a bunch of other things, and the app would display that data. So this was a pull mechanism, and initially it started off with a few pieces of data. They did it because the mobile developers were tired of waiting on the backend developers; they could just add things to this ping package. It became a pretty large package, causing a lot of data transfer, and in 2015 Uber was still working like this. It had now scaled to millions of people, and it was just really, really inefficient. But eventually, I mean, it kind of worked, and then a refactor came, and Uber changed it to a push model, so the server was now pushing, and it was far more efficient: faster, shorter delays, etc. But in the end, I was like: it worked. >> There are so many examples like that. We were talking about internal tools and automation in the beginning, right? When it comes to automation, a lot of automation projects over-focus on quality and constraints, when actually people want speed and being in control. There is this fascinating article about malleable software: software that leaves the user in charge, the best example being spreadsheets. So many internal tools start with a good Google spreadsheet, because there is so much you can do, right? You don't need to ping an engineer to add a column to your table; you can just modify it. And what's great with a spreadsheet is that it's the purest what-you-see-is-what-you-get: everything is in front of you, and it supports mass changes. I would submit that a lot of startups have an incomplete, buggy implementation of Google Sheets somewhere in their tools, because it's so effective.
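The ping mechanism described above, a client polling on a fixed interval for one ever-growing blob, can be sketched roughly like this. All names and fields are invented for illustration; the actual Uber payload and endpoints are not public:

```python
import json

# Rough sketch of the "ping" pull model from the interview:
# the client polls every few seconds, and the server returns one big
# blob that kept growing as teams appended fields to it.
# Field names here are hypothetical.

def build_ping_response() -> str:
    """Server side: assemble the entire blob on every single ping."""
    blob = {
        "etas": {"uber_x": 4, "uber_xl": 7},  # minutes, per category
        "surge": 1.0,
        # ...over time, more and more teams bolted fields on here,
        # because it was faster than waiting for a dedicated endpoint.
    }
    return json.dumps(blob)

def poll_once(state: dict) -> dict:
    """Client side: fetch the full blob and overwrite local state."""
    state.update(json.loads(build_ping_response()))
    return state

# Pull model: the client pays for the whole blob every interval, even
# when nothing changed. The later push model inverts this: the server
# sends only on change, cutting traffic and shortening delays.
state = poll_once({})
assert state["etas"]["uber_x"] == 4
```

The sketch makes the trade-off visible: appending to the blob was the cheapest way for mobile teams to ship, which is exactly why it grew until a push refactor became worthwhile.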
One interesting story from my Uber time: the users table and the trips table had two interesting columns. One was user tags and trip tags, a free-form list of tags, >> and the other was user attributes and trip attributes, a key-value store kept as JSON in that column. It was insane when you think about it, right, and it was massively overused. A lot of teams did not want to add columns to the trips and users tables. So what they did instead was stick stuff in those columns. >> Free text, pretty much a blob. >> And then it started exploding, I think roughly after I left, which is fair. But so many products were bootstrapped thanks to that; it's absolutely amazing. So that's why those really shitty internal-tools approaches do work: you get speed out of them, and you might validate very quickly that actually you don't need this feature or product in the first place, and you're going to save a ton of engineering time, which is usually the limiting factor. >> So is it fair to say that if you're at a fast-growth place, instead of reading what other fast-growth companies did, their blogs, their best practices, maybe you're better off just looking at your most pressing problems, solving those, and then doing the next thing? Or, as with anything, is there a balance here? Or should you just be more skeptical when a company like Uber, or now OpenAI, or all these very successful companies explain why they were successful; maybe that's not the full picture. >> Yeah, I would say so, because usually the limiting factor is the engineering time. I would say optimize for leaving the user in control, so that they can change things on their own without having to involve engineers too much.
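The attributes pattern Charles describes can be sketched like this, a minimal illustration using SQLite. The `attributes` column name matches the story; the specific keys and values are invented.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (id INTEGER PRIMARY KEY, attributes TEXT)")

# instead of adding real columns to the shared trips table, a team
# stuffs its feature's key-values into the JSON attributes column
attrs = {"surge_multiplier": 1.5, "promo_code": "LAUNCH10"}
conn.execute(
    "INSERT INTO trips (id, attributes) VALUES (?, ?)",
    (1, json.dumps(attrs)),
)

# reading it back means parsing free-form JSON: fast to ship,
# but unindexed, unvalidated, and easy to overuse at scale
row = conn.execute("SELECT attributes FROM trips WHERE id = 1").fetchone()
loaded = json.loads(row[0])
print(loaded["surge_multiplier"])  # 1.5
```

The trade-off is exactly the one discussed above: bootstrapping a product takes one insert, while the schema migration you skipped comes due later, when the column "starts exploding."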
Put the guardrails in place so that it does not catastrophically break your system, but leave the user in control, because you will get so much better product adoption. By user I mean, usually, the internal operations team, the internal business team, your product manager, right? Lean in this direction rather than having a super constrained product that takes a lot more time to ship and that might not even solve the problem perfectly, because you are missing this or that ops context. That's particularly the case when you build a product for the physical world. >> So one of the big changes we're having, obviously, is AI: AI coding tools, AI use everywhere. What do you see? How is AI already changing software engineering, especially for the work that you're doing at Cloud Kitchens, and what are your thoughts on how it could change things as it becomes better? >> Yeah, so much has been written; every week there's a new article, a new analysis, a new prediction. We did our own study. We found it was moderately useful. It is useful. The thing I can add, for myself: I think it's particularly useful when you have a well-defined problem that you're trying to solve, navigating the code. As a coach and as a trainer, as a manager, I use it a lot to review my documents. As a matter of principle, I never copy-paste prose that was written by an AI. I want to make sure I stay in control of what I write, >> because, yeah, writing is thinking, right? It's the same process. So I don't want to outsource my thinking to an AI. I would be really sad if that was the case. >> And when you say that with engineering you found moderately positive impact, is this to do with using autocomplete, using it to generate some parts of the code? Did you go further, in terms of, let's say, code reviews or trying to automate some workflows? >> Yeah. So we use it for all of those use cases.
I mean, obviously I don't write as much code as I would like to and as I used to. So >> Well, but now you could. >> I do, actually; that's a great point. I did a lot more coding lately, because with agents in particular it enables you to multitask. You have 10 minutes before a meeting, you issue a specific prompt, you have your meeting, you come back and then you review the code. Coding-wise, I would say it's really good at refactors: well-specified APIs, interfaces. Anything that is more complex, I feel like it's still failing at. I'm sure it's going to get better, but I think the design component will always be a human thing. The other thing: every single time we have one of those revolutions, and AI might be the biggest yet, don't get me wrong, but every time we have those revolutions, the press and everyone speaks about replacing engineers. We're still here. You remember when machine learning really picked up, people were saying, yeah, we won't need software engineers anymore. The truth is, it was actually the data scientist role that was really challenged by machine learning and ready-made models. But yeah, I don't think so. >> Your team, for example, is still hiring, correct? >> Yeah, yeah, we're still hiring. I think it's going to enable engineers to do more, to focus on more interesting tasks. We talked about migrations. I think migration is potentially a great use case. >> We hate doing it. >> Yeah. >> And it needs to be done. >> Yeah, yeah. So migrations: a great use case. For instance, at Cloud Kitchens we had a migration, and the team that owned the internal platform put together a prompt that you could copy-paste to do the migration, and you still review the code, because there are always things that go wrong.
>> What about security? Would we have an increase of attack surfaces by using AI? >> Yes, I mean, I think that's the most obvious thing as well. Security engineer is going to be a great job role. There are going to be so many things. Lately, a big source of vulnerabilities has been supply-chain attacks, right? You take control of one of the dependencies of a project, by typosquatting or by just taking control of the repo, for instance, and this way you can get remote code execution on virtually thousands of repositories. Well, agents are that, at scale. So it's going to be an incredible time for security engineers. So far so good; there haven't been that many problems, but we're going to have so many. It's absolutely obvious. >> We need to prepare for that. What about engineering management? I mean, you mentioned how you use it to help with some of your documents, but do you think it changes the engineering manager's role? Does it make some things easier? Does it make some things harder? >> Yeah, I use it as a coach. So, for instance, where I would have potentially pinged my manager for advice, maybe on more basic stuff, now I get a first review from an AI. It's hit or miss: sometimes it's extremely valid feedback, sometimes it's totally useless, and it's a good first pass, I think. But I would say never copy-paste text from an AI; I think we're going to lose skills if we do that. Well, one thing: I think more people are going to use it. I talk with engineers, and there are a few things engineers don't like to do that much: meetings, migrations, performance reviews. >> Yes, performance reviews. >> So if they listen to this: I have reached out to some of my skip-level engineers and I told them, hey, don't use AI to generate the feedback for your reviews. >> Don't only use AI, at least.
Yeah, he's going to find it funny, but there's even one engineer who took some of my Slack messages, took our competencies, and asked the AI to generate my feedback review. It's unavoidable that this happens, but, as you say, there is a point to writing as thinking, and there is a reason companies try to force a little time to reflect on, you know, your relationship with this person, or what you think of yourself, and so on. There's also one thing, which is that nobody wants to read AI-generated text. The moment I tell you that this text was generated by AI, you lose interest immediately. So what I tell people who might be tempted to do that is: give me the prompt. If you don't have time to write good English and such, which by the way is not that important, tell me what you would have asked the AI to generate, and that's what I'm going to read, because this is where the data is. There is this irony, right: it starts with a very small prompt, it's bloated into a massive text by an AI, and then somebody else will use an AI to summarize the content. Why don't we just exchange the prompt directly? We're not going to lose any information that way. >> I feel it's a very interesting contradiction with AI, specifically with text generation. I wonder if we're going to see more of this. We might see some other things in software engineering slowly repeat as we learn, right? Because, yeah, you just want the original information, the prompt. >> Yes. >> We'll see if software might or might not be the same, because people don't really care about the software. Users don't care what exact code runs under the hood. So maybe it'll be different. >> Until there is a problem, right? Until there's a problem. That's why the other thing is, I think you talk a lot about this on your blog as well: the importance of code review.
The first time I used AI to generate code, I did not realize that it would be my code. So I did not proofread it that carefully, and then I put the PR up and I got a lot of feedback, and I was like, yeah, I agree with all the feedback; this is really bad. And what I did after that is make sure that you read the code that was generated, so that it becomes yours, because at the end of the day, when something breaks, you're going to have to do this anyway. >> And that's probably the most interesting aspect, which is that reading code that you did not write and making it your own is a much higher cognitive load than writing it in the first place. And as a consequence, that's why you are still going to need engineers: because at the end of the day, the moral agent, the person making the decision and putting their stamp on it, is still a human being. >> I wonder about an interesting second-order effect of AI. It's very good at generating a lot of text from a prompt; it can generate pages of text. It's also very good at generating a lot of code based on a prompt. You give it a prompt, it generates the code. As a result, it's pretty obvious we're going to see a lot more code generated. If you think about the professional engineering teams that you worked on at Uber, at Cloud Kitchens, at the other startups: we spend a lot of time getting engineers to not write verbose code, right? On code reviews we push back on it; we don't want to copy, for example, functionality that exists elsewhere, we would rather use the abstractions. What do you think the impact might be, if we continue the thought experiment, of having a lot more code generated, a lot more wordy code than need be? Where will this lead? >> I think some startups might fail, or slow down, right? >> Yeah. >> Because if they're only doing vibe coding with very little code review, it's going to become unmaintainable, even for an AI.
>> So I would say you have to stay in control. You have to have strong code review practices in place. Yeah. One thing I often see when I ask AI to generate code: it's trained on Stack Overflow content. It's trained on not necessarily the highest-quality code. >> It's whatever is on GitHub, or whatever is open source. I mean, some open source is high quality. >> Yeah, but a lot of the stuff that's out there and can be used for training is not. >> Yeah. So, for instance, you will see that it's reinventing a feature that you know already exists in the standard library or in this or that library. Very often a failure mode, I think. So I think one of the risks is that you end up with so much code, the code becomes so overwhelming, that when something bad happens you cannot debug it yourself. And if you have to involve another AI, that will fail. >> Yeah, we might relearn some lessons that we've already learned as an industry, you know, a few decades ago. >> Yes. >> And as a closing question: what is the programming language that you like most, and which one would you like to learn? >> Yeah, difficult question, because I love programming languages. I love this quote from Bjarne Stroustrup, the creator of C++: there are only two kinds of programming languages, the ones people complain about and the ones nobody uses. So in the first category I would say Python. Why? It's such an effective language, so versatile. A really great interview choice, by the way: if you have a coding interview, it's a really smart move, because it's so effective, right? You can get so much done. It's a professional language now. Let me remind people that most of Uber was built on top of Python in the beginning. So it's a really, really powerful language. TypeScript is pretty good also, in that category. I usually prefer dynamic languages because they leave me much more in control.
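The failure mode mentioned above, generated code reinventing something the standard library already provides, often looks like this in Python. The word list is a made-up example.

```python
from collections import Counter

words = ["ride", "eats", "ride", "freight", "ride"]

# what generated code often produces: a hand-rolled frequency count
counts = {}
for w in words:
    if w not in counts:
        counts[w] = 0
    counts[w] += 1

# what a reviewer should push for: the standard library already has it,
# and the result is identical
assert counts == Counter(words)
print(Counter(words).most_common(1))  # [('ride', 3)]
```

Catching this kind of reinvention in review is exactly the "stay in control" practice Charles argues for: less code to own, and less to debug when something breaks.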
I love statically typed languages as well. And then, for the one to learn, the second category, I guess: I learned Clojure, but I never got the opportunity to use it that much. I think it's a beautiful language; Lisp dialects are really cool. I love functional programming, but sadly I haven't had the opportunity. Rust is pretty good too; I learned it as well, in that category. >> Yeah. Great. Charles, this was a really fun conversation. I'm glad we had it. >> Yeah, same, Gergely. It was long overdue. >> I hope you enjoyed this candid conversation with Charles. One of the things I like most about him is how he keeps trying out new stuff and is an absolute productivity geek. I hope this came across in the conversation as well. For more tips on how to lead projects as a software engineer, check out the deep dives in The Pragmatic Engineer that I wrote in the past, where plenty of lessons come from working with Charles. They're linked in the show notes below. If you've enjoyed this podcast, please do subscribe on your favorite podcast platform and on YouTube. A special thank you if you also leave a rating for the show.
Summary
Charles-Axel Dein shares his experience growing from engineer number 20 at Uber to a leadership role, offering practical advice on thriving in high-growth startups, effective engineering practices, and the impact of AI on software development.
Key Points
- Charles-Axel Dein discusses his journey from early engineer at Uber to his current role at Cloud Kitchens, highlighting the challenges and lessons of rapid growth.
- He emphasizes the importance of personal productivity, using tools like flashcards and the Getting Things Done method to stay efficient.
- Dein shares key strategies for success in startups, including extreme ownership, underpromising and overdelivering, and the value of weekly updates.
- He stresses the importance of understanding the full stack and context, not just focusing on one's immediate responsibilities.
- Dein discusses how AI is changing software engineering, particularly in code generation and migration tasks, but warns against over-reliance on AI.
- He highlights the need for strong code reviews and human oversight, even when using AI-generated code.
- Dein shares insights on hiring, including the importance of structured interviews, pairing interviewers, and building strong relationships with recruiters.
- He reflects on the importance of reading and learning, recommending a mix of technical and non-technical books to improve skills.
- Dein discusses the balance between speed and quality in startups, noting that early focus should be on shipping and validating ideas.
- He offers advice on managing imposter syndrome, suggesting it can be a positive force for growth and improvement.
Key Takeaways
- Invest in personal productivity early to maintain efficiency and structure in fast-paced environments.
- Use structured methods for interviews and hiring to ensure consistency and fairness.
- Prioritize understanding the full context of your work and the systems you're building.
- Leverage AI as a tool for assistance, but always maintain human oversight and review of generated code.
- Focus on shipping value and learning from real-world impact rather than perfection.