How I built an Apple Watch workout app using Cursor and Xcode (with zero mobile-app experience)
You used AI to make getting swole at the gym easier. I am so excited about this. Let's see how you built this. >> I started using the GPT mobile app as like speech to text, and I was like, well, if the model can understand when I'm talking to it like it is right now, why can't a workout app do this and then tag the data for me? Make it like a structured data set with analytics. >> Let's talk about how you built this thing, because I wouldn't even know where to start in terms of the mobile and watch side of things. >> I was like, "What if I have a Python script that takes these files, runs them through GPT-4o, and just shoots it to me in an Excel?" Two months later, I now have, like, an Apple Watch and an iPhone app. Now, I did jumping jacks, 35 reps. It sends it to the phone. So, whatever device you have, log your workouts pretty much with no work. >> It does simplify this experience of going to the gym, like the gym bros with their notebooks where they're writing down their reps, or in their notes app. This is awesome. Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today we have Terry Lynn, product manager, vibe coder, and AI-powered gym bro. He's going to show us how he made a mobile and watch app to track his workouts using Cursor, Xcode, and index cards. Let's get to it. This episode is brought to you by Paragon, the fastest way to ship product integrations. Are integrations on your product roadmap? Whether you want to ingest your users' files from apps like Google Drive or OneDrive, sync data with their CRM, or automate tasks like searching Salesforce records, integrations are mission-critical for SaaS products. But integrations take months to build, and existing solutions aren't good enough. Embedded iPaaS are great for automation but break under high volume, and unified APIs are limited by their endpoints and data access.
With Paragon, developers can ship new integrations in days, with purpose-built products to support any use case, from high-volume ingestion to real-time actions and automations. That's why engineering teams at Pipedrive, Cinch, and hundreds of other B2B SaaS and AI companies use Paragon, so they can focus their efforts on core product features, not integrations. Visit useparagon.com/howiai to see how Paragon 2.0 can help you accelerate your product's integration roadmap today and get $1,000 off. >> Terry, I'm super excited to have you on the podcast. Welcome. >> Thank you. Thank you. Uh, longtime listener, first-time caller. >> Well, speaking of first time, we have had a lot of web developers, including myself, on this podcast, but we actually haven't spoken to very many people building for mobile. And I know mobile apps have special challenges technically, you know, just building them. And I'm super curious to see how folks like you are approaching using AI to build mobile apps. So, before we get into the how, let's see the what. What did you build with AI? >> So, I built a mobile fitness tracker called Cooper's Corner. And essentially, the problem I was trying to solve, maybe to put my PM hat on for a sec, was I would go to the gym, I would try to be consistent, and then, you know, life gets busy. You start to slip one or two days, and then you go back a week later and you're like, "Oh, what was I doing?" And so, I tried these fitness apps where you kind of have to create an account, set up your exercises, tell them what equipment your gym has. And it felt like doing a lot of homework. And even when you're working out, you have to log what you did, you know, your rest timers, things like that. And then last year, I started using the GPT mobile app as like speech to text.
And I was like, well, if the model can understand when I'm talking to it like it is right now, why can't a workout app do this and then kind of tag the data for me and, um, make it like a structured data set with analytics? And so that's kind of how I started down that path. I basically started with a little script, and then eventually, two months later, I now have an Apple Watch and an iPhone app. So I'll run you through how it works and then we can talk through, uh, how I built this. >> Okay. So you used AI to make getting swole at the gym easier. I am so excited about this. Uh, as someone who definitely never skips a gym day. Okay, let's see how you built this. >> Okay, cool. I'll run you through how it works real quick just so everyone has some visual context here. So this is the login screen. Can you see it? He's the hero of the image. Does this look good? >> Oh, Cooper. Cooper the dog. >> Yeah. And I don't know if you can see him, but he's sleeping right there, literally like on the side of my screen. >> Cooper. >> Yeah. Okay. So, the first thing is authentication, right? I'm using Sign in with Apple. There's kind of a UX decision there, because I don't want you to create an email and have to, like, you know, enter a code. And so one of the first things you'll notice is when you log into the phone here, it logs into the Apple Watch. So there's some stuff I can talk about there, but essentially you could basically just use voice and record your workout here. And so I'll give this a shot. I did dumbbell shrugs, 35 lb dumbbells, 10 reps. And so what I built is you could record it from your phone or you could do it from the Apple Watch, because sometimes when you're at the gym, you don't always have your phone, uh, too, right? So let's see if it got it right. There we go. See? And it has the transcript here. If I try it from the phone, right now: I did jumping jacks, 35 reps. See, it sends it to the phone.
So, whatever device you have, it's kind of cool. It can, uh, log your workout pretty much with no work. >> So for people that are not watching on screen, this is an app that goes across your mobile device and your smartwatch, and you can basically speak to it at the gym, say what exercise you did, what weight, what reps, and it just automatically populates a structured record of that exercise, including what time you did it. It shows a picture of the equipment you might have used, or, um, a picture of somebody doing jumping jacks, or in this case push-up jumping jacks. And it does simplify this experience of going to the gym, you know, like the gym bros with their notebooks where they're writing down their reps, or in their Notes app. So you built this. It's multi-device. It's multimodal. It goes voice to text, and text to structured output. Let's talk about how you built this thing, because I wouldn't even know where to start in terms of the mobile and watch side of things. >> Yeah. And there's one more thing to show you real quick. I'll flash it on. We were talking about how do you be consistent. So there's actually like a history view here. So you can look at your 7-day, 30-day, 90-day analytics to see how consistent you are. And you can see in, like, the last 30 days, I've been going to the gym 21 days, right? So now I know when I've been slacking and when I haven't been, and also I can see my top exercises. You can see I do a lot of jumping jacks here. I do leg presses. And then if you click into the leg press, you have a whole history of what you've done. So now you have, like, a scatter plot of where my weight's going, uh, like a history of whether I'm actually progressing or not, too. So that's kind of it. I could walk through how I built that, but any questions there? >> No, let's go through how you built it. >> Okay. So one thing with iOS apps is that, uh, you usually have to use Xcode, which is Apple's IDE, to actually build the code.
So I do something called dual wielding here. So this is what Xcode looks like. And what I do is usually a side-by-side with Cursor here. The way I link this is I make both of them point to the same folder on my computer, and then Cursor will do the coding, uh, and then I will do the building and the debugging in Xcode. Uh, the reason is because sometimes on the phone when you get errors, like compile errors, build errors, uh, Xcode is really where you have to go to get that, because with Cursor, it's not like a web browser where you could just kind of open the dev tools and then start looking at the console. And so, uh, there's a little bit of a workaround you have to do. And I've seen some people online have these hot-reload apps, but this is kind of the solution I found that works best for me. And also, if you have to build to the Apple Watch, you kind of have to do it separately. So, that's kind of why I have this split workflow, uh, for now. >> Got it. So, we want Cursor to somehow build a native integration here, so you have a little bit more cross-pollination, if you could. >> Yeah. Yeah. Yeah. And it's not like web apps where you just run a localhost and then you kind of can see it in your browser. You have to kind of always build it on your phone. And right now you're seeing this in the iOS Simulator. I would say even when you're testing mobile apps, like, swiping around with my mouse here feels very different than when I'm on my phone testing in here. And it's even different when I'm in the gym, actually, like, running around and trying to record stuff. And sometimes the audio is not great, too. So I think when it comes to mobile apps and you're testing this, you just have to get as close to the actual experience. Otherwise, you're kind of just testing in a box and not really getting the true kind of user experience there. >> So, how did you get this started from zero?
Kind of, what was your setup in terms of defining the product, getting Cursor set up? How did you work through that workflow, and then doing the build, um, and testing in Xcode and on your phone? >> Yeah. So, like we say in product, you want to think big, start small, right? So the V1 of this was actually just using Apple Voice Memos, like, on the Apple Watch. I would record that and then copy it to my computer. And then around February this year, I started getting into vibe coding, and I was like, well, what if I have a Python script that takes these files, runs them through GPT-4o, and then just shoots it to me in an Excel? And so what that looks like here is kind of this spreadsheet here. This is like the very V1 of it, where you see the transcription. It attempts to tag it with different muscle groups and, like, reps and sets. But the problem here is that this is not structured data, right? It's just whatever the model thinks the exercise is. And so then I was like, all right, well, then what's the solution? You've got to put it into a database where it's actually structured data. You have, like, the foreign keys. You can actually manipulate it better. It becomes more consistent. So that's kind of where it went from this to a backend API that I built in Cursor. >> I want to pause here, because I actually think, for folks that are listening, this is a pretty cool hack, which is using voice notes on your watch to just narrate your workout at the gym, and then download that text, you know, put it in even ChatGPT or some sort of, like, no-code little flow, and populate it in a spreadsheet. That's just such a simple and easy way to start. And then what you're saying is, you're a PM and an engineer and you want foreign keys and a database. So you decided, let's make this a full app. >> Yeah. And it's also, if you look at the spreadsheet, how do you make sense of this? Right?
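For listeners who want to try that spreadsheet-era V1, here is a minimal Python sketch of the tagging step. The regex parser is a hypothetical stand-in for the GPT-4o call Terry describes (his real version transcribes the voice memo and has the model tag it); every function name here is invented for illustration.

```python
import csv
import io
import re

def parse_workout(transcript: str) -> dict:
    """Hypothetical stand-in for the LLM tagging step: pull the
    exercise name, weight, and reps out of a spoken workout log."""
    reps = re.search(r"(\d+)\s*reps", transcript)
    weight = re.search(r"(\d+)\s*(?:lb|pound)", transcript)
    # treat everything before the first digit as the exercise name
    name = re.split(r"\d", transcript, maxsplit=1)[0].strip(" ,")
    return {
        "exercise": name,
        "weight_lb": int(weight.group(1)) if weight else None,
        "reps": int(reps.group(1)) if reps else None,
    }

def to_csv(transcripts: list[str]) -> str:
    """Write one structured row per voice memo, like the V1 spreadsheet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["exercise", "weight_lb", "reps"])
    writer.writeheader()
    for t in transcripts:
        writer.writerow(parse_workout(t))
    return buf.getvalue()
```

In the real pipeline the parsing is a model call, which is exactly why the output is "whatever the model thinks the exercise is" until you enforce a schema and a database behind it.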
Over time, you don't really have these analytics that you see here on the right side on the phone. And so I was like, well, how do you then do that? You can ideate with GPT. I think when you work with models, they could either work with you or work for you. This is very much on that working-with-you side, where you're just ideating on features and how you can potentially turn, like, an MVP into a hacky tool that gives you a higher iteration speed. >> So, okay, so you decided to build this app. How are you actually building? What is your step-by-step workflow in Cursor? >> Sure. So I have kind of a three- or four-step process in Cursor. The first step is creating a PRD. Second is having a model review it. And then the third step is executing. And so the way I have that set up is kind of three rules here. There you see prd-create, prd-review, and prd-execute. Uh, and so if we take a file here, uh, as an example, here I have an existing ticket. What I'll do is I'll take, uh, an issue, I'll add it to Cursor here, and then I'll have it run through prd-create. And what this rule does is essentially breaks it out into what you need for implementation: any reference diagrams to actually do this task, uh, what are the goals we're trying to hit. Uh, one thing I do is I use Gherkin user stories to actually describe the scenario. So the format is like: given something happens, when the user does this, uh, then do this action. And so there's also some investigation here that happens. So if I don't know how to actually do something, or the model doesn't know what files to use, what databases to touch, that's kind of the checkpoint here. And then this is kind of like the V1 of the PRD, where it's not fully fleshed out, but at least it's the structure of it. And then one thing I realized is, sometimes, early on when I was vibe coding, this document would not be enough. And that's kind of why I created a PRD review rule, where it basically, uh, does a check on that PRD.
So, one question I ask is: if another model were to take this plan, uh, how would you rate it out of 10 if it had no context and had to execute on this? And so, you're basically sanity-checking your PRD to see if it actually has any gaps that a model could trip up on down the line. >> You've seen the doom and gloom headlines. AI is coming for your job. But the reality is a little bit brighter. In Miro's latest survey, 76% of people say AI can boost their work. It's just that 54% still don't know when to use it. As a product leader and a solo founder, I live or die by how fast I can turn fuzzy ideas into crisp value propositions, roadmaps, and launch plans. That's why I love Miro's innovation workspace. It drops an AI co-pilot inside the canvas, so stickies, screenshots, and brainstorm bullets can become usable diagrams, product briefs, and even prototypes in minutes. Your team can dive in, riff, and iterate. And because the board feels like a digital playground, everyone has fun while you cut cycle time by a third. Miro lets humans and AI play to their strengths so that great ideas ship faster and happier. Help your teams get great done with Miro. Check out miro.com to find out how. That's miro.com. >> So, how did you build these Cursor rules, and how did you know what to build and how to put stuff in this template? >> Honestly, a lot of blood, sweat, and tears. I think when I first started vibe coding in February or March, no one was really talking about memory banks or rules yet. And so, the problem I was running into was, every time the model got to do more and more complex tasks, it would at some point trip up over itself. It would make up stuff. It would make up file directories. It would get endpoints wrong. And so I realized, after repeating it, telling it what to do time and time again, I started noticing these patterns in, like, how to give it context. And I was like, why am I repeating this over and over again?
I should just make these into rules. And so that's kind of how I ended up with these processes. And you notice I also broke them into, like, three PRD rules. The reason is because originally my rules were super verbose, maybe 800 lines long, and I realized I was just running into context window limits with these same rules. So you'll see they're now no more than 200 lines long. And so I'm actually much more cognizant of how many tokens I'm using with these rules. And so over time I've gotten more efficient with that, uh, as I get more experienced with Cursor. >> And the second part of this, which I think is so interesting and mirrors how things work inside product organizations, is you write a PRD and then somebody looks at it. Somebody reviews it and says, is this good? And that person could be, you know, a PM lead, it could be a peer PM, or it could be an engineer. And it seems like you're taking this from an engineering point of view and saying, like, as a model reviewing this with zero context, what do you think? And I like this idea of rating 0 to 10. Do you feel like the ratings are generally pretty accurate? >> Yeah. And it's more of a barometer for me and how well I prepared it. If it's like a seven out of 10, I'll then ask, well, what are the three points? Why did you dock it? And it'll give me some reasons on what those gaps are. And I can say, but is this an edge case? No, we could ignore that. And you want to get it to at least, like, a nine out of 10 before you do it. And the reason is, then, when you have it run through the plan, it could probably one-shot it pretty quickly. >> Mhm. >> And so I'm doing that here on the side. You see there's a seven out of 10 here. And so it gives me, like, a strawman and steelman of it. And so you see the gaps identified. I can then be like, "All right, inconsistent API: are these actually things we need?" And then I'll kind of iterate this until it's a 10 out of 10.
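That review gate can be sketched as a simple loop. The `review` and `revise` callables below are hypothetical stand-ins for the model calls behind the prd-review rule, and the prompt text and 9-out-of-10 threshold are my paraphrase of what Terry describes, not his actual rule file.

```python
REVIEW_PROMPT = (
    "If another model with zero context had to execute this plan, "
    "how would you rate it out of 10, and why did you dock points?"
)

def iterate_prd(prd, review, revise, threshold=9, max_rounds=5):
    """Keep revising the PRD against the reviewer's reported gaps until
    the zero-context rating clears the bar, then hand it to execute."""
    for _ in range(max_rounds):
        score, gaps = review(prd)   # model call in real use
        if score >= threshold:
            break
        prd = revise(prd, gaps)     # fold the gap feedback back in
    return prd
```

The point of the gate is exactly what he says on the show: a plan that rates nine or above can usually be one-shotted, so the loop front-loads the iteration into the document rather than the code.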
And then that's when I run the execute rule, which, I think you've seen, is like the checkbox rule. You have it in chunks. Uh, something I do extra is I have a git commit before and after each phase and ask it to pause, so I don't just have it one-shot everything. But, uh, this is when I know I can let it run, go get a coffee, and then come back and just do some QA. >> Got it. So your step is: write it as a PRD. Gosh, your poor AI PM. You write the PRD, then someone else tells it if it's good or bad. Then you iterate. Then the poor PM has to make epics and tasks. The AI PM has to make the checklist. And then you're giving very explicit directions to check in at every single point. And then you say, "Here you go. Here's your PRD. Here's your checklist. Let it run." Um, and then are you coming back and checking that code, doing the build in Xcode, and then verifying it all works and shipping it? >> Yeah. So, here's an example of one, um, where on the Apple Watch, when I showed you, I hit record and I stopped it. It was saying "send to iPhone," but it wasn't staying on long enough. So, if you're recording this, you may miss it, and then you may wonder, hey, did it actually even go to the phone? And so, this was adding, like, a little timer, so if you're kind of walking around the gym, there's feedback that, hey, we got this. You don't have to worry about it. You don't have to worry about it failing. And so, this is what the PRD looks like.
I logged it originally in a tool called Linear, and so I pulled that in here and then I ran it through prd-create, and it created this doc. And I'm very explicit, where I even tell it what files to touch, just because I don't want it to search through my codebase and waste tokens. Like, hey, here's what you need to do, here are the files, here are the endpoints, just go run with it. Yeah, so here's an example of the phase checklist. You see phase one, it does the checklist, and at every phase it kind of pauses, right? There's a safety checklist here, right? No placeholder code, because we don't want you to make stuff that could break things. There's some error handling, and then it kind of runs through this. >> Yeah. And I want to pause for folks that are not watching, on that checklist, if you could go back to it. It's really interesting. Every phase, at the end of the phase, says you have to verify the paths, um, make sure you're using real data, no placeholders, and that you're handling errors well. And so it's really interesting that you really, again, are following this pretty classic development pattern of, like, product manager defines requirements and breaks down the tasks, engineer executes, you do a QA phase, and then you move on to the next task. So you sort of, um, structured your rules and your process to map to that, and then the only difference is you're using an AI PM and an AI engineer and an AI to sort of run QA, uh, to run through all this. >> Yeah. And I think 80% of the time I'm working with the model. It's this last 20% where the checklist is just burning through tasks, uh, for me, too. So I think the biggest switch is probably just learning to work with the model.
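The phase loop he describes, a git checkpoint on both sides of each phase, the safety checks, and a pause for human review, could be expanded like this. The checklist items and command strings below are illustrative paraphrases of the episode, not Terry's exact rule text.

```python
SAFETY_CHECKLIST = [
    "no placeholder code",
    "real data only, paths verified",
    "errors handled",
]

def phase_plan(phases):
    """Expand a PRD's phase list into the step sequence described in
    the episode: a git commit before and after each phase, the safety
    checks, and a pause for human review in between."""
    steps = []
    for phase in phases:
        steps.append(f"git commit -am 'checkpoint before {phase}'")
        steps.append(f"execute {phase}")
        steps.extend(f"verify: {check}" for check in SAFETY_CHECKLIST)
        steps.append(f"git commit -am 'checkpoint after {phase}'")
        steps.append("pause for review")
    return steps
```

Committing on both sides of a phase means any phase that goes sideways can be rolled back with a plain git reset, instead of untangling one giant one-shot diff.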
If you get good at that, I think after that, this stuff is just kind of table stakes. It's just executing, uh, for me. >> And I'd love to understand: you've mentioned conserving tokens for a while, and I have to understand your motivation, because I am just like, eat all the tokens you want, I just do not care. Tokens for everyone: you get a token, you get a token. Um, is this a cost sensitivity thing? Is it a performance-of-the-task thing, where, you know, too much context in the Cursor context window actually degrades performance? What is your motivation for token conservation? >> Probably my sanity, if I put a thought to that. So the reason I say sanity is because, uh, LLMs are really good at just generating a bunch of code. It could be very robust. And so sometimes when you have very large files, I noticed with Cursor, when it goes through your codebase files, it reads in, like, 200-line chunks at a time. And what I realized over time was that I had these files that were, like, 1,000, 1,500 lines long, and it was spending a lot of time just going through that. And then as it was going through tasks, it would just start getting tripped up a lot. That's when I kind of stumbled upon this concept I call vibe refactoring, where you basically have to take the vibe coding and reorganize it into something that's cleaner. And that way, I guess, it's like good engineering principles. They do that too, right? You refactor code, you keep it clean. So that's something I do with another rule here, where I kind of have the PRD rule, but it's not meant to create features. It's meant to refactor existing codebases. And so I essentially give it the same, you know, context docs. I give it the API endpoints. And I give it different guidance on, hey, what do we need to refactor? Why should we do this? How do we make sure it doesn't break? Like, how will we validate this after it's done?
And it kind of runs through the same process, where you do that PRD planning, but your goal is very different. It's more of a cleaning-your-apartment, making-sure-everything-works-fine kind of thing. >> Okay, so I have to call this out for all the engineers, engineering leaders, and large engineering organizations, uh, that are watching the podcast. Yes, AI does have the ability to generate lots of code, and maybe not the highest-quality code always, um, in pursuit of shipping features. And I know there's a lot of nervousness in the industry about, oh my god, what if we give PMs access to ship code, or designers access to ship code? The code's going to be so ugly. They're not going to know. It's going to cause a lot of tech debt. But what I have experienced is exactly this: AI is actually quite good at refactoring. And I almost suggest to engineers or teams that are adopting these tools: build the refactoring in as a known cost of your AI implementation. Almost plan it that way, where it's like, okay, I'm going to vibe code to the feature working, so I can check that customers like it, that I can see that it works, that I can see if I like the design, all that kind of stuff. Get it out there, you know, like, secure, performant, but not beautiful code, and then go back and spend a vibe day refactoring. And I love that you've actually built this. People would love to see these rules, so maybe we'll bribe you to share them. But, um, I love that you've built this rule of, like, this is how a good refactor goes. Here are my engineering principles that I want you to prioritize. Here are the things you should think about. Here are the performance things you should think about. I think it's just a really nice process. And I've had this experience myself, where I build a feature, I get it out, people like it, great. I wasn't happy with the code, but I knew that.
And I go back and refactor, and it's so much cheaper and faster, honestly, than almost any other path to, like, the good code. Um, so I think this is an exceptional way to do things. >> Yeah. And I have an example on my screen here of what that plan would look like for a refactor. So the quickest way to do that is just to do a line count and have the model do it. Hey, what are my biggest files? Which ones are, like, god files that we need to refactor down? So one of these is this recorder file we need to go into. That recording logic I just showed you, there's 900 lines. I generally try to keep them under 200 to 400 lines, just because I don't want the model to use context going over code it maybe doesn't need for a task, right? And same thing here: it has that checklist, right? Each phase. And then, yeah, basically each phase it'll try to do a build and the next code, and then make sure there's no compile errors, and it'll just keep chugging through that. And we have that safety checklist, uh, here too. >> And let's make this clear for, uh, folks that are listening: you very intentionally set up, as one of the design principles of your codebase, that you want your files to be of a small enough size that the LLMs themselves have an easy time working with and navigating individual files and getting tasks done. And I think this is really interesting, because I think about it a lot. As a human, what is my preferred structure of a file? Do I like embedded functions? Do I like a bunch of little bitty files across the codebase? What is my preference? And sometimes I like these longer files, because I'm in a file and I'm like, well, I want the definition of this here. I just want to be able to read and understand what this is doing. But an AI engineer reads the files very differently. And so what I like about your approach here is you're optimizing for your AI colleague and the way that they read this context.
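The line-count triage he mentions is easy to script. Here's a minimal sketch: the `.swift` extension and 200-line threshold follow the numbers from the episode, while the function name is my own invention.

```python
from pathlib import Path

def biggest_files(root, ext=".swift", limit=200):
    """List files over `limit` lines, largest first: candidates for a
    vibe refactor, since Cursor reads files in roughly 200-line chunks."""
    oversized = []
    for path in Path(root).rglob(f"*{ext}"):
        lines = len(path.read_text(encoding="utf-8", errors="ignore").splitlines())
        if lines > limit:
            oversized.append((lines, str(path)))
    return sorted(oversized, reverse=True)
```

The top of that list is what you would feed into the refactor version of the PRD rule as the target files.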
And it may or may not be the same as how an engineer or a human may want to read and retain the context. And so I think that's going to be a real challenge for teams that are putting this kind of flow into production: making those design principles work for all systems that are contributing code. >> Yeah. And one thing that's useful, if you have smaller files, is I have this final rule called rubber duck. So in engineering, there's a concept where, if you're trying to work through an issue, you talk to an inanimate object and explain the code line by line. And so what I use this rule for is, if I'm just trying to learn, or optimize my rate of learning. Like, we just generated a bunch of code and I don't know what some of it does. If I want to just learn what it does: can you just take this file and explain it to me? And sometimes over dinner, you know, I'll just ask it to, like, pop quiz me. It'll give me a function, I'll think about it, and then I'll kind of use this to rubber duck with it as a partner, um, too. >> This is a great idea, because I do this a lot. Once I have let Cursor, you know, run through a big chunk of code, I just summarize it with: explain to me what you did, and how you did it, and why you did it. We actually put those really clear explanations into our pull request descriptions, so it's very clear how something was built. But you're getting the A+ colleague award by actually quizzing yourself on how your own app works. And I think a lot of people are worried that, um, you know, vibe coders or whatever are going to build these apps and have zero idea how they actually work, whether they're secure, and honestly, they'll have zero idea, um, of the underlying technologies, and won't actually develop technical skills that can make them an overall better builder. And this is a really good strategy, kind of built in, to use your vibe coding process as a learning tool.
Um, which I think is the best way to do things. I have, um, been a self-taught software engineer. I've always had to learn by, like, tapping a colleague on the shoulder and saying, "What do you think of this code? How did you build this? Why did you make this decision? Why did you select this library?" All that kind of stuff. And now you just have this, like, infinitely patient colleague to do that with. And I just think it's such an amazing learning opportunity for folks that want to develop technical skills, that want to be able to code themselves or in partnership with AI. So I think this is a really awesome process, and I definitely want these rubber duck rules. >> Yeah. And I think vibe coding is essentially like reverse rubber ducking, right? Like, we're telling the LLM what we want, but then it's spitting out the code for you, right? And I think building this muscle actually helps you build faster over time, because you're learning how to debug stuff with the LLM as your tutor. As you go through this process, you can start to see when it starts to make mistakes, and you can get through them faster, just by going through this process and learning your codebase. And kind of the rate of learning is what you're trying to optimize for, uh, with this rule. >> I always think this is so interesting. All of you organized vibe coders out there. Ryan, who was on our podcast, has a somewhat similar, slightly different flow. Yours is a little bit more technically oriented. I still just like to float through the ether, eating as many tokens as possible, letting the LLM take me where it will. Maybe I will bring some more of this structure. I'm just really good at writing PRDs, you know. Um, and so I start from there and then just let it rip. Um, but I love this process. Okay.
I want you to show us one more thing, which is a little bit more on the design and exploration side, which is how you combine, um, you know, offline drawing and online tools to build little mobile prototypes and try out ideas. >> Yeah. So with mobile, the fun thing is I prototype this using index cards here. So I'll show you what that looks like. Uh, so sometimes in New York, when you're in the subway, you run into these dead spots where you don't have reception. And so one thing I started doing is using these little index cards here, where I would just draw out the UX, and you can kind of, like, press something, you can change it, to kind of simulate that, uh, kind of UX. And then what you can do is you can send this picture into GPT-4o on your mobile device and just tell it, hey, this is a mockup. Can you help me upscale this? And so here's kind of what it looked like here as a scribble, right? And then you can start getting variations, asking it to change different layouts, and then eventually you can put this into a tool. I use a tool called UX Pilot, which can actually spit out, uh, Figma components, and then when you drag it into Figma, uh, Apple has some libraries you can use. So this is what the UI kit looks like. You can do, like, a segmentation bar; they have different names for their different things on the iPhone. And you see this, and then you just click insert, and you can just start building, like, an iOS app very quickly with kind of the default Apple layout, too. So that's how you can quickly spin up a design. >> Okay. So you are prototyping a mobile app on your mobile app, sometimes with no connection, by using an index card, which is a perfect aspect ratio for prototyping a mobile app. You take those, you upload them to ChatGPT. Do you use the 4o image generator? Is that what you're using to get the, um, prototype? >> Yep. And I think this was around the time when they released the image gen and it just got really good.
And so that's when I realized, oh, it could take a low-fi thing and upscale it for me very quickly. >> Cool. And then you drag it into Figma and use some of the out-of-the-box components. I have to ask, have you ever just uploaded those index cards directly to Cursor to see what it does with them? >> I have tried, but the results aren't great, because I think it's an input-output equation. If you give it something low-fi, it just gives you something pretty sloppy. I think everyone knows that the more hi-fi you get, the better quality you get. And that's where I'm really struggling now: getting the design 80% of the way to what you want is pretty easy, but that last 10% is really, really hard to fiddle with using an LLM. >> This is what I tell designers all the time: your value-add is not getting the forms on the website. It's not getting the buttons in the app. It's that last 10% that really differentiates an app. And I still think there's this great place for human creativity, craft, and innovation in that space. So I hope the designers who are listening look at this app and go, "I know what the 10% is," and then bring that to their companies or their projects. >> Yeah. Just looking at this app, there are things that annoy me, and I'm like, oh, this is the last 10%. But it works, and I should do other things, because you've just got to keep shipping, right, as one person. >> Oh my gosh, you're giving us PMs and our love for MVPs a bad name, because you know those engineers and designers always want to finish that last 10%. >> Yeah. Yeah.
I think when you do these side projects as a PM, it also makes you communicate better with your engineers and designers, just because you know what they go through. When you bring that context back, you can have much better conversations by doing these side projects, too. >> Great. I completely agree. Well, Terry, this has been so fun. We are going to do a couple of lightning-round questions and then get you back to your dumbbell shrugs. My first question: the thing that struck me at the beginning of this podcast is the Xcode-Cursor rebuild flow. So if you could make an ask on behalf of all the mobile app developers to anyone thinking about AI software coding tools like Cursor, or any of the agentic workflows, what's your ask? >> Anything that lets me know what's going on in the mobile app. One example is that there's no network tab in Xcode, so you don't actually know what traffic is going in and out, which makes it hard to debug. I have to fall back on print statements, which gets very old-school and annoying. >> Got it. So just basic quality-of-life improvements for mobile engineers. Do you feel like the LLMs are pretty good at mobile languages? Do you feel like you're getting high-quality code output when you're developing for mobile? I've heard from a lot of users that these LLMs are really well trained on some ecosystems and languages and less so on others. Have you had any challenges there, or do you feel like they're pretty good? >> I haven't had issues yet. Where I have issues is when it uses an older library or an older way of writing the language, but other than that, it's been okay, because these tools can now access the docs and websites, so you can figure it out from there. And I think they'll just get smarter over time. So I haven't seen that be an issue yet. >> Great. Okay. And then my last question.
You're so organized, so maybe you never have this problem, unlike me, who is very disorganized. But when the LLM is not listening, when it's making 900-line code files instead of 200-line code files, what is your tactic for getting an LLM back on the right track? >> Let's see. I use a lot of git commits, and that's my fallback, basically. I'm almost overly committed to that. >> So you are just breadcrumbing along the way with each change. You do risk mitigation as opposed to redirection. So every little change, you're doing a git commit, and if it ever gets off track, you can just go back and reset. >> Yep. Every three tasks there is a git commit, before and after. And that's how I know I can let it rip, because I have that risk mitigated, right? >> Oh, come on. You've got to live. You've got to do a 15-file, plus-2,500-lines, minus-74-lines commit. You've got to live, my friend. >> Yeah. >> See, here's where we can show we're just very different people: red, blue, fire, ice, you know. Hot, YOLO AI coding versus very controlled, git-commit coding. Okay. Well, this was very fun, Terry. Thank you for showing us your flow. I think this is really useful for anyone coding in these tools, for PMs looking at how to get started or apply their process, and for people stuck on the subway thinking about how they can make their next app. Where can we find you, and how can we be helpful? >> Yeah, you can find me on LinkedIn or on X. It's me, Terry Lynn. >> Amazing. Thanks so much. Thanks so much for watching. If you enjoyed this show, please like and subscribe here on YouTube, or even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app. Please consider leaving us a rating and review, which will help others find the show. You can see all our episodes and learn more about the show at howiipod.com. See you next time.
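Terry's checkpoint habit can be sketched as a simple shell loop: commit before letting the agent loose on a batch of tasks, review and commit again after, and hard-reset when it wanders. The file names, commit messages, and demo repo below are illustrative, not from his actual project:

```shell
set -e
# Throwaway demo repo so the commands are runnable end-to-end
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name demo

# Checkpoint BEFORE letting the agent start a batch of tasks
echo "func logWorkout() {}" > Workout.swift
git add -A && git commit -qm "checkpoint: before tasks 1-3"

# ...the LLM implements the next few tasks...
echo "// 900 lines of questionable agent output" >> Workout.swift

# Review what it actually changed before committing
git diff --stat

# Happy? Checkpoint again with another commit.
# Off the rails? Throw away everything since the last good checkpoint:
git reset -q --hard HEAD   # discards the agent's uncommitted edits
grep -q "questionable" Workout.swift || echo "agent edits rolled back"
```

The point of committing on a fixed cadence rather than at "natural" stopping points is that the fallback always exists before you need it, which is what lets you run the agent loosely in the first place.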
Summary
Terry Lynn, a product manager with no mobile app experience, built a cross-device Apple Watch and iPhone workout tracking app using Cursor, Xcode, and AI tools. He used a structured workflow of PRD creation, model review, and phased execution to build a voice-to-structured-data app that simplifies gym logging and analytics.
Key Points
- Terry built a mobile and watch app called Copper's Corner to track workouts using voice input and AI.
- The app transcribes spoken workouts, structures data, and provides analytics across devices.
- He used Cursor for AI-assisted coding, Xcode for building and debugging, and a dual-IDE workflow.
- His process involves creating PRDs, having the model review them, and executing in phased checklists.
- He optimized for LLM context by keeping files small and using a 'rubber duck' rule to understand code.
- He prototyped designs using index cards and GPT-4 image generation before moving to Figma.
- Token efficiency and risk mitigation are key, achieved through git commits after each phase.
- He found LLMs capable of writing quality mobile code when provided proper context and constraints.
- The approach balances AI speed with human oversight for quality and learning.
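Cursor supports project-level rules files that are injected into the model's context, which is one way to encode conventions like the "rubber duck" rule and the small-file limit above. The episode doesn't show Terry's actual rule text, so the wording below is purely illustrative:

```markdown
<!-- .cursor/rules/rubber-duck.mdc (illustrative sketch, not Terry's actual rules) -->
- After generating or changing code, explain what it does and why you made
  each key decision, as if I were reviewing it over your shoulder.
- Before pulling in a new library, say why you chose it over alternatives.
- Keep source files under ~200 lines; split modules rather than growing them.
- Work from the phased checklist in the PRD; stop after each phase so I can
  review and commit before you continue.
```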
Key Takeaways
- Use a structured workflow (PRD → Review → Execute) with AI to build complex apps efficiently.
- Optimize your codebase for AI by keeping files small and using explicit checklists for each phase.
- Leverage AI for prototyping with low-fi sketches and image generation before high-fidelity design.
- Implement risk mitigation like git commits every few tasks to maintain control over AI output.
- Use AI not just to build features, but to learn and debug your own code through a 'rubber duck' process.