The secret to better AI prototypes: Why Tinder's CPO starts with JSON, not design | Ravi Mehta

howiaipodcast | _yQMGHHl49g | Watch on YouTube | Published September 28, 2025
Duration: 54:39 | Views: 13,029 | Likes: 259
Scores: Composite 0.52 | Freshness 0.00 | Quality 0.85 | Relevance 1.00
10,503 words | Language: en | Auto-generated transcript

PMs and designers are prompting prototyping systems that they don't quite understand how to get the best outcomes from. I'm always impressed that a prototype gets generated, but sometimes it's just not quite what I need for the product I'm building or the experience I'm trying to craft. And so I know you have come up with this system called data-driven prototyping which you're going to show us. >> The thing that we can do is we can help the LLM by separating out the idea of not just generating the UI but also helping it with the data. So I've got a prompt here. It says, using JSON, because we want it to be structured data, generate a sample itinerary that I can use to prototype a shared trip itinerary feature. The destination is Paris. >> I just think about the human parallel to this, which is searching through stock photos trying to find which one is representative. It just takes so much time, and because an MCP can now programmatically go through the tasks to be done using these external tools, it makes it a lot faster to get higher quality media into your prototypes. >> So this is the finished prototype based on that prompt. We can see it generated 22 different files, so really nice componentization. It's got a little bit of sample data in there and it generated mock data. So we can see what day one looks like. We've got some photos in there. We can see what day two looks like. This will be you teaching me how to actually bring some data and structure to my vibe designing and prototyping. This is genius. I'm really excited. Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today I am giving you elite prompting strategies from Ravi Mehta, who is CPO at Tinder and a product leader at places like Facebook and TripAdvisor. Ravi's going to show us how design systems and UX descriptions are not the foundation of great prototyping. In fact, JSON and data models should be. He'll also walk us through how to use structured prompting in Midjourney to get high-quality photos and images for your prototypes. Let's get to it. This podcast is supported by Google. Hey everyone, Shishta here from Google DeepMind. The Gemini 2.5 family of models is now generally available. 2.5 Pro, our most advanced model, is great for reasoning over complex tasks. 2.5 Flash finds the sweet spot between performance and price, and 2.5 Flash-Lite is ideal for low-latency, high-volume tasks. Start building in Google AI Studio at ai.dev. >> Hey Ravi, thanks for coming on How I AI. I'm excited to see some of these workflows that are going to be really useful for me. >> Thanks so much for having me. I'm excited to go through it too. I've been having a ton of fun playing with these things. >> Yeah. So we've seen a lot of engineers lean into vibe coding and some of the pros and cons of that, and what I am also seeing, which I think you are probably seeing, is product managers and designers doing vibe prototyping. If we're saying engineers are writing code they don't understand, I might argue that PMs and designers are prompting prototyping systems that they don't quite understand how to get the best outcomes from. And I think these are such cool tools for product managers and designers and other folks to get their ideas across, but a lot of times I've been personally dissatisfied with the outcomes of my prototypes.
I'm always impressed that a prototype gets generated, but sometimes it's just like not quite what I need for the product I'm building or the experience I'm trying to craft. And so I know you have come up with this system called datadriven prototyping which you're going to show us that's going to help us close that gap between using sort of like vibe based price vibe based prompting into these prototyping tools into something a little bit more structured that you think gets better quality. >> Yeah, absolutely. And I've been playing a lot a lot with prototyping both for people that are doing 0ero to one as well as people that are using prototyping to understand established products. And when you're using prototyping to understand and to advance established products, the game is a little bit different because you have existing UI um that you have to adhere to. And then you have existing data and functionality that you have to adhere to. And often times the thing that's really important is how do you provide the right context to the vibe coding tool? And I think that there's two common ways that people do that. The first is like spec driven prototyping where you write a really detailed prompt. you try to make that as specific as possible to give the tool enough context to create the thing that you need. The other way that people, you know, create uh prototypes is with design-driven prototyping where you actually start with images. It might start with wireframes. You might start with Figma designs. You upload them in and then the prototyping tool takes those and they bring those to life. Um, but it occurred to me when we're building products, a common thing in the product life cycle between design and specification is when engineering starts to take a look at what you want to build, one of the first things that they do is they say, "Here's the data schema that's actually going to drive the front end." And by doing that, they take um some of the things that are a little bit ambiguous around designs or specs and they codify them in a really concrete way. And so I started to play around with can you do that from a prototyping standpoint? And I want to show um how you can use this technique to create prototypes that are um a lot more functional um and they're a lot more flexible so that you can um change them to test different data sets for different purposes so you can get really good accurate user feedback. So why don't we start? We'll um take a look at how someone might uh typically use more of a spec driven um type of approach to prototyping. I'm using uh Reforge Build, which is a new prototyping tool specifically designed for product teams that are working with established products. Um it's been really good. I think one of the things I've noticed about it is it just generates very clean code that's much more usable um in terms of taking things into production. So here I've got a short uh prompt. Uh so make a website for planning a Paris trip with multiple people. Include some activities, hotels, and restaurants over 3 days. Add user profiles and let people comment on things. make it look nice. So I was thinking about when I was at Trip Adviser, we wanted to create a trip itinerary planning feature, but what we figured out as we were doing the spec for that is trip planning is often a multiplayer activity, not a single player activity. 
So we wanted to understand how you create a trip itinerary feature that is really multiplayer from the start, and I thought prototyping is a great way to explore that idea. So here's how you might typically start to build a prototype around that idea and explore it. >> Yeah. And I have to say, "make it look nice" is a commonly used Claire Vo prompt when going into these AI tools, like "make it good." Very sophisticated prompting techniques. So I think this is ripped-from-the-headlines vibe prototyping prompting right here. >> It's funny, because it does work. I find that AI is often very responsive to a little bit of pushing and prodding. >> Yeah. Okay, great. So you're using this very common experience-based description of what you want to see, right? Very functional. It's a website. It has a certain set of pieces of data in it. There's user profiles and people can comment. Very functionally descriptive. >> Definitely. And what's pretty amazing is that this will work. It's not a lot of context, >> but if I hit create, we'll go in and the tool will create a plan. The Reforge Build tool does a really nice job of asking you some follow-up questions. And you're actually asking it to do multiple things all at the same time. You're asking it to think about the UX design. You're asking it to think about the underlying data structure. You're asking it to figure out the code architecture. So there's a lot that needs to be done. And at the end of the day it's capable of doing it, but because you're asking it to do so many different things, the output is kind of an average across those things rather than really spiking each of those areas, which is what you want. As a product team you have someone who's great at design, someone who's great at product, someone who's great at engineering, thinking about all of those things. One of the things that I like a lot about the Reforge tool is it asks you follow-up questions to help make your prompt better, which is important for making sure that the system has as much context as it needs. I'll just skip that for now. So it gets into the code generation. And right now it's going in and starting to write the code. It's starting to come up with a componentized plan. So it's going to use reusable UI primitives. It knows overall what it's trying to do, which is create a full prototype with mock in-memory state. And so it's actually doing the work to say, okay, we need a data model here, and as part of doing this it will create that underlying data model as well. And you can see here it's creating a file; it's called lib/mock.ts. But at the end of the day, because it's trying to do so many different things at once, the end result typically is not as high quality as the approach that I'm going to show. So let me actually cut over to this particular prompt, "make a website for planning a Paris trip," already built, so we can see what that looks like. >> Great. And one of the things that I want to reflect on while you're pulling that up is I think the exposure of the reasoning is really interesting, because I was reflecting on your prompt and I was like, what if I Slacked this to a designer? What if I said, make a website with a three-day itinerary for Paris with multiple people and comments? I probably wouldn't even get back a list of questions.
I would get, yeah, find time on my calendar so we can talk about this. >> Totally. And it's so funny that you can do that iterative process of taking a very high-level idea, getting back some structured questions to help the design side and the product side get fleshed out, and then having it go into, okay, what would an engineer think about implementing this, all in two or three responses. I reflect on it as an accelerated version of the product development process we all know and love. It's just so interesting that it follows almost the same pattern, but much, much faster and more efficient. >> I mean, it's pretty amazing that with that little context you can get to what we have here, but we could definitely do better. So this is >> the finished prototype based on that prompt. We can see it generated 22 different files, so really nice componentization. It's got a little bit of sample data in there and it generated mock data. So we can see what day one looks like. We've got some photos in there. We can see what day two looks like, but it's also got some problems. So, for example, the Seine river cruise: it tried to get a photo, but that's failing. And that is actually a hallucination problem. A lot of times when these tools are trying to create media, they do know some URLs that are out there, but they'll hallucinate other URLs and then you'll get broken links like this. >> I want to call out another one on day one. If you go to day one, this looks like a hotel in French Polynesia, but not in Paris. >> Yeah, it's a really good point. This is definitely not the right photo. I hadn't even noticed that. >> Yeah. I mean, I want to go there. It looks like a great hotel, but it does not look like a Parisian hotel. >> This is also not the best photo of the Eiffel Tower. >> No, I mean, it is one of the more important parts of the Eiffel Tower. >> It is an important part. Yes. >> Okay. So we're looking at this. In 30 seconds it's very impressive, right? As I said, we're impressed that these prototypes can be generated at all, and what I think you're calling out is it's good, not great. If a designer brought this to you, you'd be like, let's just go back to the Figma board and try something else. >> Absolutely. And it's going to take a lot of back and forth to get it to the level that you want. But the thing that we can do is we can help the LLM by separating out the idea of not just generating the UI but also helping it with the data. And so the idea behind this approach is that rather than starting with a prompt like we have here, let's start with a data set. So I'll go over to Claude and I'll ask it to generate some data. I've got a prompt here. It says, using JSON, because we want it to be structured data, generate a sample itinerary that I can use to prototype a shared trip itinerary feature. The destination is Paris. The itinerary should include an itinerary name, cover photo, and date range covering three days. There should be three to four travelers associated with the itinerary. Each traveler should include a first name, last name, avatar photo, and preferred travel style, like foodie or history buff. For each day, include a collection of things to see on that day. There should be 12 to 15 items in total. The items should be a hotel for day one and popular things to see on each subsequent day.
Each item should include a name, a start time, a duration, a star rating, number of reviews, tags to describe them, a photo, and a short description. Some items should have notes from one or more travelers. The notes should be in chronological order and respond to each other, like a message thread for each itinerary item. And so here we have essentially a data schema specification, and it's a lot more detailed than what we put in up front. So we're cheating a little bit, but oftentimes teams that are working on existing products will already know what their data schema is. And so you'll have a head start not just with the schema, but also with the data. >> Well, and I'll also say that the cheat code here, even though you really are describing a pretty detailed data schema, is that you're not talking about relationships. You're not talking about parent-child relationships, foreign keys, none of that stuff. You're really just describing the components of the data structure in natural language, which probably took you a minute or two to type out, or if you're using voice, even less than that. And so it does allow you to get a lot more detail that will eventually be more structured, without having to force yourself to write a data model. >> Absolutely. And this is a nice way to think about the feature: what are the key things I'd want to see in the itinerary? What are some of the key things about the travelers? And so just thinking about it from a data-first perspective helped me understand the feature >> a little bit better. >> Another really important part of the data is media. So avatar photos, and photos that are actually of the Eiffel Tower, actually of the hotel that you're planning to stay at. And so for this particular prompt, I've added in the Unsplash MCP server. MCP servers are a great way for Claude to be able to access external services. The Unsplash MCP server can take in a particular query, like Eiffel Tower or a particular hotel name, and then pass back a photo that matches that query, so that when we generate the mock data, we're actually getting real URLs from Unsplash. And so if I go in here and look at my tools, one of the tools that's active here is the Unsplash MCP server. It's pretty easy to install. There's a tool called Smithery which makes it easy to get up and running. And this was a key unlock for me: how do you actually pull in actual photo data versus some of the hallucinated URLs that you'll often get? >> Or do what I do, which is go sit on the Unsplash site and search through and find ones that I like and all that kind of stuff. So I did not know this MCP existed, and it will definitely shortcut a lot of my prototyping workflow. So I'm excited to see how this works. Okay, so you've created this prompt. I must call out for the power users of Claude: I bet you could create a Claude project to create prompts like this for data models. So there's probably a whole meta cycle you can do here to make even this a little bit more efficient. But let's show what this generates. So you put this into Claude, you connect the MCP, and then you generate something. >> Yeah, absolutely. So let's start it up. >> Okay. And is there any reason you chose Claude over any other tool? Any particular affection for the model, the app, any of that stuff, or is it just the one you reach for? >> I do love Claude. I find it's pretty consistent.
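For reference, MCP servers for the Claude desktop app are registered in its claude_desktop_config.json under an mcpServers key. Below is a minimal sketch of what an entry for an Unsplash photo-search server might look like; the package name, the npx command, and the UNSPLASH_ACCESS_KEY variable are placeholders rather than the exact server used in the episode, which is only described as being installed through Smithery.

    {
      "mcpServers": {
        "unsplash": {
          "command": "npx",
          "args": ["-y", "unsplash-mcp-server"],
          "env": { "UNSPLASH_ACCESS_KEY": "<your-unsplash-api-key>" }
        }
      }
    }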
I use ChatGPT and Claude, sort of a 50/50 split, >> Yep. >> during the day, but anytime I want to generate data that feels human and authentic, I find myself going to Claude. And I wanted to go to Claude here particularly because of the conversations between the travelers; I thought Claude would do a nicer job generating those. >> Great. And what I want to call out here for folks who are maybe not watching on video and are listening is that Claude is starting to generate this comprehensive JSON and then calling the search photo tool in the Unsplash MCP over and over again to generate a Paris cover photo, avatar photos, attractions, and I'm sure hotels and restaurants. And I just think about the human parallel to this, which is searching through stock photos trying to find which one is representative. It just takes so much time, and because an MCP can now programmatically go through the tasks to be done using these external tools, it just makes it a lot faster to get higher quality media into your prototypes. >> Absolutely. And you know, this will take a long time. It's interesting with AI: I think certain things are moving a lot faster with AI, and other things aren't. It's much faster for me to create a document these days than it is to create a presentation, in a lot of cases. And here, manually going through to search the photos is really challenging, but it got all those photos and now it's generating the JSON. So it's taken that natural language prompt where we said, okay, here's the data schema, and it's going to fill out the data schema with pretty authentic information. >> And I want to do a callback to your first prototype, which is: I'm sure this, or a version of this, is what's populating your first prototype, but it was, as you said, one of many jobs the prototyping tool had to do. It not only had to think through user experience, technical implementation, and writing the code, it also had to go, okay, and what data goes into this code, and what images. And I do think the idea of taking critical parts of that workflow, giving a dedicated prompt and tool to those critical parts, and taking that job off the general build can ultimately end up in higher quality. Or at least I'm guessing that's our hypothesis here. >> I think so. And I think that's the fundamental concept behind agents: you want individual agents with individual contexts working together in sequence to get to an output, rather than trying to do it all in one go. And here Claude has done a really nice job with this prompt. We now have an incredibly detailed data set. We've got the travelers, the messages that they're sharing back and forth, the items they want to see, like Montmartre, the type of each item, the different relevant tags that are fun and interesting, and URLs for images. And now we actually have a much more detailed spec, in the form of data, for this trip itinerary platform. So now we can just copy this JSON, go back to our build tool, and our prompt doesn't even need to be very detailed. We could do something like: generate a trip itinerary feature based on the sample data below. Paste it in, so we have all that sample data here, and then hit create. And now there's a ton of really interesting, very specific context that's available to the prototype.
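To make the shape of that output concrete, here is a small, hand-written sketch of the kind of JSON the prompt above asks for: one itinerary, a couple of travelers, and a single day-one item with a note thread. The field names, values, and Unsplash URLs are illustrative assumptions, not the actual data Claude generated on the show.

    {
      "itineraryName": "Paris with Friends",
      "coverPhoto": "https://images.unsplash.com/photo-paris-cover-example",
      "dateRange": { "start": "2025-10-03", "end": "2025-10-05" },
      "travelers": [
        { "firstName": "Emma", "lastName": "Laurent", "avatarPhoto": "https://images.unsplash.com/photo-avatar-1", "travelStyle": "foodie" },
        { "firstName": "Marcus", "lastName": "Chen", "avatarPhoto": "https://images.unsplash.com/photo-avatar-2", "travelStyle": "history buff" }
      ],
      "days": [
        {
          "date": "2025-10-03",
          "items": [
            {
              "name": "Hotel Le Marais",
              "type": "hotel",
              "startTime": "15:00",
              "durationMinutes": 60,
              "starRating": 4.6,
              "reviewCount": 1283,
              "tags": ["boutique", "central"],
              "photo": "https://images.unsplash.com/photo-hotel-example",
              "description": "Check in and drop bags before an evening walk along the Seine.",
              "notes": [
                { "traveler": "Emma", "time": "2025-09-20T10:04:00Z", "text": "Can we get early check-in?" },
                { "traveler": "Marcus", "time": "2025-09-20T10:12:00Z", "text": "I emailed the hotel, 1pm works." }
              ]
            }
          ]
        }
      ]
    }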
And what I found interesting about this approach is that when you provide data in this way, the AI doesn't get fuzzy with it. It will just take the data and use it as is, and then build the rest of the experience around the data. >> You just gave me an idea. So this is an impromptu How I AI idea, but there are so many SQL generation and data schema exploration MCPs. And I was just thinking, as I'm prototyping apps, I should just hit our production database and come up with sample JSON that represents the real data that somebody would use in some of our features, and then use some of that to prototype it. We're showing a completely fictionalized set of data, but I could imagine a world in which you actually pull a representative set from your production data, or production-like data, to really give your prototypes a real feel for how your users are using them. And as a product leader, I've done this a lot in product and design reviews, where I say, yeah, but what happens when the user's profile is a thousand words long, or what happens when the Eiffel Tower photo is vertical and we crop it horizontally? Have we thought about all these things? Actually putting that real data in helps you stress test the user experience in a way that I think is really important, because designers are never going to put a vertically, inappropriately cropped photo in their beautiful Figma designs. You're never going to get that accidental broken experience, but AI will do it for you and help you test some stuff. >> This is really true for UGC experiences, right? Because the content that users provide is never as beautiful as what we put into Figma. It's nice to see how it's actually going to look to users. The other interesting thing is if you have a set of data that you can pull out, but you want to augment it in some way. So let's say we had a trip itinerary but we didn't necessarily have conversations. You could throw that JSON into Claude and say, augment this JSON with information about the travelers and their conversations, and it'll start with the JSON that you have and flow in the data that you need. And so you can iterate on the data that you already have to get to something you need for your prototype. >> Well, and I'm not going to presume your age (we're both 21 years old), but this is also making me think back to how much lorem ipsum I put in mockups. For the very young people out there, you used to have to put placeholder text and placeholder images in your designs, and there was actually a cottage industry of funny lorem ipsum generators on the internet where you'd copy and paste paragraphs of text. And I'm just thinking about the fact that you can now put pseudo-realistic content at scale in your designs. I even think about going past fake data. I've seen so many designs where designers just grab the component and duplicate it down the page, so it all looks the same and the number of comments is the same and all that kind of stuff.
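As a rough sketch of Claire's production-data idea, here is what turning a handful of exported rows into prototype-ready sample data might look like in TypeScript. The row shape, field names, and the anonymization step are all assumptions for illustration; in practice you would export rows however your team normally does (a SQL MCP, a CSV dump, an admin API) and map them into whatever schema your prototype expects.

    // Hypothetical shape of rows exported from a production "trips" table.
    interface TripRow {
      trip_name: string;
      start_date: string;
      end_date: string;
      traveler_first_name: string;
      traveler_last_name: string;
    }

    // Map raw rows into itinerary-style sample data, lightly anonymizing
    // last names so real user data never ships in a demo.
    function toSampleItinerary(rows: TripRow[]) {
      const anonymize = (name: string, i: number) => `${name[0]}. (Traveler ${i + 1})`;
      return {
        itineraryName: rows[0]?.trip_name ?? "Sample Trip",
        dateRange: { start: rows[0]?.start_date, end: rows[0]?.end_date },
        travelers: rows.map((row, i) => ({
          firstName: row.traveler_first_name,
          lastName: anonymize(row.traveler_last_name, i),
        })),
      };
    }

    // Example rows you might have exported from production.
    const exportedRows: TripRow[] = [
      { trip_name: "Team Offsite", start_date: "2025-06-02", end_date: "2025-06-04",
        traveler_first_name: "Dana", traveler_last_name: "Kim" },
      { trip_name: "Team Offsite", start_date: "2025-06-02", end_date: "2025-06-04",
        traveler_first_name: "Luis", traveler_last_name: "Ortega" },
    ];

    console.log(JSON.stringify(toSampleItinerary(exportedRows), null, 2));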
And I I do think it's really helpful to be able to generate the sort of like full surface area of the data model and the design experience without so much um manual burden on a designer, a content designer, a product manager, an engineer trying to figure out what goes in each component and what are the versions of of each of those components. >> Absolutely. Um, and it it's one of these things where it work it works much better with stakeholders and with users is if you have authentic data. >> I think as product builders, we're used to thinking about data separately from the UX and sometimes we just gloss it over, but users never do and so as a result, >> you know, they do um you need to kind of have something that looks and feels as it actually will be in order for you to get realistic feedback. >> Okay, we got a we got a prototype. Yeah. So, we got a prototype uh much cleaner than the other prototype. We have a list of all of the travelers. We have what's happening on each of the days. We have beautiful photos for the different things in the itinerary. Um so, I can click on day two, day three. >> Oh, look. We got a full Eiffel Tower. >> Um we've get Absolutely. And it's a beautiful shot, right? It's pulling the stuff off on Unsplash and it's working on pulling out their most popular images so they look really good, >> which is especially important when you're prototyping for consumer because people are very sensitive to this. We've got tags, we've got comments, um avatar photos. So, all of this feels much richer than um than it does if you just have it generate both the data and the functionality at the same time. Yeah, this is I mean, you know, we're doing a before and after comparison, but this is just a lot richer. It's a lot more accuracy of the data significantly improves the perceived quality of the the design. I I mean honestly, you know, there are some components of this that are similar of the old design. Not exactly, but it's really interesting to see how just having the right photo, the Eiffel Tower, the right data, some of these like metadata components, like how long it might take for you to spend time at an attraction, accurate avatars, which I like. I think the old avatars were just little um initials avatars, but these are actually like real people. their friends, Emma, Oliver, and Marcus are apparently going on this trip. >> Uh, it really it really does look like a much higher quality experience here. >> And we can take a look at the the before. >> Um, let's see. >> And yeah, you know, this is a good start, but it's not sort of the level of quality that we want and that we need. >> Yeah. And it it almost is a lot cleaner, too. I was noticing in the old prototype there's like a lot of little um emojis and things that are are filled in here and there that you as a product person or designer might not want in. But when you say like here's my clean data schema, here's the media I want and the media I don't want. Um it gives you sort of a much more modern look and feel to this experience. >> And I think that's because of this separation of concerns. the tool has been able to focus on what is the right UX around this data set rather than simultaneously figuring out the UX simultaneously figuring out the data set. >> Yeah. And what I want to call out is so many people that I've spoken to are really worried about getting design systems into these prototyping tools but have really underinvested in what you're showing which is like the data models. 
And I was actually talking to somebody yesterday and they said, "What context do I need to make sure I always give my PRDS and my prototyping tools to generate?" And I said, "Get your engineering to give your definition of your data schema and just copy and paste that in." That is like one of the first things I think you should do because it's the right level of constraints around the experience. And you're just showing sort of the next level of this, which is populating that, extending it, and then putting it into a prototyping tool. And then what's really nice about this, and we just generated on the fly, so I'm going to have to see where the code is. But if we go into the files, we have a nice breakdown. And if we go into lib, which is often where the data ends up, we have a sample data um file. And so we can go in and we can change this. Let's say, you know, actually we want Marcus um to be called Mark rather than Marcus. We can go in here. Let me see where Marcus is. change his name to Mark. Just need to reload. >> And now we change Mark. >> Same thing with the photo. Like, you know, this is kind of a good Paris photo, but we can probably find something better from Unsplash. So, let's just search for Paris. This one's a great one. Copy the image address. Come back here. If we look, we've got the cover photo. We can just replace that. And then again, reload. Yeah. Oh, >> new photo. >> It looks so nice. >> It does look really nice. It's coming together. >> This episode is brought to you by Persona, the B2B identity platform helping product fraud and trust and safety teams protect what they're building in an AI first world. In 2024, bot traffic officially surpassed human activity online. And with AI agents projected to drive nearly 90% of all traffic by the end of the decade, it's clear that most of the internet won't be human for much longer. That's why trust and safety matters more than ever. Whether you're building a nextgen AI product or launching a new digital platform, Persona helps ensure it's real humans, not bots or bad actors, accessing your tools. With Persona's building blocks, you can verify users, fight fraud, and meet compliance requirements, all through identity flows tailored to your product and risk needs. You may have already seen Persona in action if you verified your LinkedIn profile or signed up for an Etsy account. It powers identity for the internet's most trusted platforms, and now it can power yours, too. Visit withpersona.com/howi to learn more. you're replacing these sort of peace meal, but if you wanted to stamp out a bunch of different versions of this completely, you're working with just the the data file, right? >> Yeah. So, we can actually just go back into >> Claude. >> And now we can just say something like now generate an itinerary for the same travelers going to Thailand. >> Yep. >> So, I'll get that. This um chat already has all the context. It knows what the schema is. It's going to go back out to Unsplash to grab all those photos and it's going to generate a new itinerary. Same people, different trip. And then once we have that JSON file, we can actually take that and apply it directly to um the prototype. 
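A minimal sketch of what that kind of generated sample-data file might look like, assuming the prototype keeps its mock state in a module such as lib/sampleData.ts (the file name, types, and values here are illustrative, not the exact code Reforge Build produced): swapping trips then just means replacing the exported object with the new JSON from Claude and reloading.

    // lib/sampleData.ts (hypothetical): all mock state for the prototype lives here,
    // so swapping the whole trip is a paste-and-reload operation.
    export interface Traveler {
      firstName: string;
      lastName: string;
      avatarPhoto: string;
      travelStyle: string;
    }

    export interface Itinerary {
      itineraryName: string;
      coverPhoto: string;
      travelers: Traveler[];
    }

    // Paste the JSON generated by Claude here; components import it read-only.
    export const sampleItinerary: Itinerary = {
      itineraryName: "Paris with Friends",
      coverPhoto: "https://images.unsplash.com/photo-paris-cover-example",
      travelers: [
        { firstName: "Emma", lastName: "Laurent", avatarPhoto: "https://images.unsplash.com/photo-avatar-1", travelStyle: "foodie" },
        { firstName: "Mark", lastName: "Chen", avatarPhoto: "https://images.unsplash.com/photo-avatar-2", travelStyle: "history buff" },
      ],
    };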
It it just I was I was just thinking about again going back to when we had to like walk uphill both ways in Photoshop for our designs like the speed at which you can create versions of your design is really helpful and you know one of the things that I'm thinking about here is great go ahead and localize this in Spanish or go ahead and localize this in another language let me see what that looks like. Um, or even when you want to extend the design, going back to, and maybe this just my my engineering brain likes this, go back to, well, let's update the data model first and then let the design cascade out of the data model as opposed to putting buttons on the front end and then working our way back into into the data model, I think is just a really nice primitive on which to standardize your prototyping efforts. >> Absolutely. Um, and then it allows you to be much more flexible in terms of what you're doing and allows you to work on the functionality separate from the data model. So, for example, here, let's say I wanted to add a feature where, you know, I want to be able to see blank cards if people have time in between activities. So, we can kind of see where the free time is in the day. It'll go in and it'll implement this functionality using that data set. So if we want to put a new data set set in here or change anything, the functionality is totally dynamic rather than baked into the prototype, which often is. >> Awesome. I really like this. And you know, again, I we're looking at the data modeling, the design of things, but I'd be remiss not to mention how helpful it is to have the content researcher, especially on a consumer experience, of what hotel should I put in, what attractions should I put in, what do they actually look like? And maybe maybe your designer's been to Paris, maybe they have not. Um, and you certainly don't want them spending time googling like the the top, you know, hotels in Paris for people in their 20s and 30s. This does, in addition to doing the scaffolding, it actually does the right research on the content, what to put in here, and feels pretty realistic. >> And you might actually have two different itineraries. One for, you know, someone old who's older who's going to Paris for their third or fourth time, someone who's younger who's going to Paris for their first time. And then you could test this feature with itineraries that really make sense for the user who's using uh the tool. So now we have the free time cards added in. We've got four hours, four and a half hours between um checking into our hotel and our dinner. We can look at day two and see that we've got uh a couple of other time blocks in here. So now um Claude is completely done generating our Thailand itinerary. And so we can actually just swap out the itinerary. If we go into the code, um, we can see, okay, we've got, uh, the data here. Let me, um, copy that over. And then I can just replace this and then reload. >> And now we have a Thailand trip. >> That was thrilling to watch. >> All super easy. >> And the those free time blocks stayed. We have great photos here. Sophia and Emma. I bet he's named Marcus, though, because we made that edit in a different tool. So, >> I think so. Yeah, if we were Yeah, we look we we over wrote that, but that's easy enough to change. >> Yeah, that's easy to change. >> We can go back here uh and update it. >> This is awesome. 
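For the free-time cards, the underlying computation is simple once the data model carries start times and durations: walk each day's items in order and emit a gap card whenever the next item starts well after the previous one ends. Here is a rough sketch assuming the item fields from the earlier schema (startTime as HH:MM, durationMinutes); it is not the prototype's actual implementation.

    interface ItineraryItem {
      name: string;
      startTime: string;        // "HH:MM", assumed from the schema above
      durationMinutes: number;
    }

    interface FreeTimeBlock {
      afterItem: string;
      minutes: number;
    }

    const toMinutes = (hhmm: string) => {
      const [h, m] = hhmm.split(":").map(Number);
      return h * 60 + m;
    };

    // Emit a free-time block for any gap of 30+ minutes between consecutive items.
    function freeTimeBlocks(items: ItineraryItem[], minGapMinutes = 30): FreeTimeBlock[] {
      const sorted = [...items].sort((a, b) => toMinutes(a.startTime) - toMinutes(b.startTime));
      const gaps: FreeTimeBlock[] = [];
      for (let i = 0; i < sorted.length - 1; i++) {
        const endsAt = toMinutes(sorted[i].startTime) + sorted[i].durationMinutes;
        const gap = toMinutes(sorted[i + 1].startTime) - endsAt;
        if (gap >= minGapMinutes) gaps.push({ afterItem: sorted[i].name, minutes: gap });
      }
      return gaps;
    }

    // Example: prints a 240-minute block between hotel check-in and dinner,
    // matching the "four hours of free time" shown in the demo.
    console.log(freeTimeBlocks([
      { name: "Hotel check-in", startTime: "15:00", durationMinutes: 60 },
      { name: "Dinner", startTime: "20:00", durationMinutes: 90 },
    ]));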
Okay, so to recap for folks: instead of just prompting into your prototyping tool, you can use your favorite general LLM tool, in this instance Claude, to generate a JSON schema of the data you want to represent in your experience. Go into your prototyping tool, say "generate a user experience based on this data," and paste the data in. Then use that data schema as the basis for iterations and updates, and even swap it out with entirely new content and watch how your experience adapts, or show that to different user segments, or just really impress your boss. >> Yeah, absolutely. >> You've completely changed my approach. I think this is going to be just as popular as "Ryan Carson teaches Claire how to appropriately vibe code." This will be you teaching me how to actually bring some data and structure to my vibe designing and prototyping. This is genius. I'm really excited. I know you're working on a blog post; if it's out, we'll link to it. But this is great. And again, one of the things I want to call out: I am just a really lucky B2B enterprise girl. We just work with form fields and buttons. The fanciest I get is a button with a gradient in it. But you, with your consumer product background, get to work a lot with media. And one of the things we're looking at here is the media aspect of it. The photos really do make the experience. >> Absolutely. >> Yeah. We used the Unsplash MCP to get these real, free stock images into your prototypes, but I know you've also been working on generating great photos yourself. So do you want to show us a little bit about how to use Midjourney, again unlike how Claire does it, which is to just float around in the Midjourney model until something cool comes out, with a little bit more structure than that? >> Yeah, let's do it. So I was playing around with Midjourney a lot, trying to get mock data for a project I was working on, and I was working with a designer named Finn Sturdy, who has done a brilliant job of figuring out how to get stellar results out of Midjourney that feel really curated, that feel like a creative director has helped design them. And as we looked at how he was prompting things, we discovered a few things about how you can use really specific wording within your prompts to elevate the images. So let's start with a very simple prompt. And I think what's great about these tools is that even if you're not very descriptive, you're still going to get pretty good results. I mean, you can see the content on Midjourney is already beautiful no matter what you're looking at. >> Yeah. >> But let's say we just want a stock image of an office chair. >> You might just type in "office chair." I know you're probably going to type in more than that, but let's see what it generates. So it's going to go through, it's going to look at references. These are still pretty nice office chairs. The photo is nice. Mhm. >> But is it really usable? Is it the sort of thing that you would drop into a catalog or something like that? It probably isn't there >> yet. >> And the way that we can get to a much better end result is to think about three things: the subject, the setting, and the style. So I'll use a new prompt. This will actually generate much better results.
We're very clear about the subject that we want, which is an empty, stylish office chair. And then really clear about the setting. And the setting includes both the both the placement in the room and the lighting. Um, lighting is a really key part of setting that photographers think a lot about. And if you talk about the lighting in the prompt, you're going to get much better results. And then the last thing we want is a particular style. Um, and there's a couple of things that can help with defining style. the sort of thing that a lot of people do is they try to describe it. Um, but that's generally not how photos are tagged. Um, and so the idea here is to think about, well, how would a photographer actually describe a particular photo? Um, and they might use cultural references or location references. And then often times they'll use camera metadata or other information about the shoot. And so in this particular case, we've added in uh a keyword for the film stock that we want to emulate, Fuji color C200, which is a very warm uh film stock that generates um really beautiful kind of like golden hour type of results. So I've started that generation and now we're going to get something much more usable than the initial prompt. >> Yeah. And as somebody who was a early midjourney user, I think midjourney is like the gateway drug to consumer AI. If you have like a parent that is not yet bought into AI or does not understand what it can do, >> get them in Bid Journey. You're going to get some weird Facebook posts, but they're going they're going to be unlocked on something really really special. But what I would say is I'm shocked at how fast it's gotten. It used to be so tediously slow and it's fast now. Okay, we got a pretty I mean I want that to be my office. Who cares about the chair? >> So beautiful. We've got the Italian kind of cultural cues. We've got the beautiful lighting. It's a great chair. This is definitely usable photo. We've also, you know, I think one of the things Mid Journey excels at is uh giving you these different variations. So, here's another one. Another one. Another one. All really beautiful and very usable if that's something that you're looking looking for. And what was key here is thinking about the prompt in terms of how a photographer might describe things rather than uh telling the AI what you want. So for example um you know we could change the setting and a photographer is probably not going to say I want really soft lighting. Instead they're going to describe the setting which is an autumn raining morning. So, we're going to take this exact same scene and move it into um a different lighting setting, which will change the mood quite a bit. >> And as one of the the four people in technology that have a liberal arts degree, I have to call out, I do think this moment where we're using natural language to generate assets across the board, especially media assets, art literacy is really important. The ability to describe design is really important. I don't think people spend enough time articulating what taste means, articulating what elegance means, what quality, what style means. And even going through this practice of understanding reference art styles, reference, you know, devices like digital cameras or film, um, locations. I I think like language is now such a foundation for technology that if you're not investing in your linguistic skills, um you're going to miss out on your ability to create these like high quality assets, at least at this point. 
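As a quick way to internalize that subject / setting / style structure, here is a tiny TypeScript sketch that assembles a Midjourney-style prompt from those three parts. The exact wording of Ravi's prompts is not quoted in full in the episode, so the example values below are paraphrased from it (an empty stylish office chair, a Milanese room, Fujicolor C200 film) rather than copied verbatim.

    interface PhotoPrompt {
      subject: string;   // what do we want?
      setting: string;   // where is it, including the lighting
      style: string;     // how should it look: cultural cues, film stock, camera metadata
    }

    const buildPrompt = ({ subject, setting, style }: PhotoPrompt): string =>
      [subject, setting, style].join(", ");

    // Paraphrased example in the spirit of the episode's office-chair prompt.
    const officeChair: PhotoPrompt = {
      subject: "an empty, stylish office chair at a trestle desk",
      setting: "sunlit Milanese apartment on a bright morning, soft window light",
      style: "editorial interior photography, shot on Fujicolor C200 film",
    };

    console.log(buildPrompt(officeChair));
    // Swapping only the setting shifts the mood without describing the lighting directly:
    console.log(buildPrompt({ ...officeChair, setting: "the same room on a rainy autumn morning" }));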
>> I totally agree. And I think the two, you know, fundamental inputs into creating something are taste and craft. And so taste is the ability to know what's good or what's what you want. Craft is the ability to actually achieve that vision. And with AI, it's completely 10xed everyone in terms of the craft. Anyone now can create photos or music or other other things, but the taste is really important. How do we take that incredible power um and use it to create something that meets the needs that we have whether creatively or professionally? >> Yeah. And if you don't feel like you have the natural skills to look at a photo like this and say, "What makes a photo like this make me feel this way or look this way?" hack is take this photo, drop it in like a clot or a chat GPT and like have it describe it back to you in language and it can give you it can kind of be a train an exercise ground for giving you language into these prompts. So I think people need to put a little effort into um training themselves how to how to write good prompts and kind of backwards engineering out of things that they like how they might describe them better. And as you develop that understanding, you'll also develop a vocabulary around it, which I think is incredibly powerful for prompting. >> Yep. Great. So, we have a beautiful tray. It's a trestle desk. >> Got a beautiful chair. >> It is an autumn rainy morning. >> Another one. You've got the raindrops. You've got uh Milan in the background. >> Y >> another one. Another one. Um so, very quickly, we've gone from this kind of beautiful morning light to this kind of softer light. not by describing the light itself, but by shifting the setting and having the LLM kind of figure out what is the appropriate lighting for that setting. >> Now, I have to ask a question. Is this photo and this prompt a reflection of being a parent that works at home and just imagining what it would be like to have a completely empty clear desk looking out at Milan in a stylish probably very expensive easy to ruin design within reach here. >> That would be amazing. One could dream if only you could see my desk right now. It's tiny and filled with, you know, battery and stream deck and all sorts of stuff. >> Exactly. So, again, you can use Midjourney for commerce and for escapism here. >> 100%. Yes. And then if you rather be in New York than Milan, >> you know, here we've just added uh New York to the photo and said it's a modern glass American architect's office. And what's interesting, I think a lot of times people think a lot about the foreground in their photos, but not what the background is. And the background is just as important to providing a sense of place. And now, okay, we >> go ahead. Yeah. >> So, remind me. So, remind me again. We need subject, we need location, we need light, and just pick film or a camera as your cheat code. >> Yeah. So, subject, what do we want? Setting, where is it? um which includes the lighting and then style. Um how do we want it to look? >> And you and I were hypothesizing before the show that the reason why film or camera could be a cheat code here is we suspect that these models are trained with a set of metadata that includes that information. And so when you narrow in on that data set, you get a more um refined source of content that's that's generating these images. >> Yeah, absolutely. And like let's show an example of that. So film stock is often metadata that is associated with a photo. 
And so here I've changed from the Fujifilm stock to Kodak Tri-X, which is a really beautiful >> black and white stock that >> is very contrasty and has a lot of grain. So rather than telling the LLM "I want grain," which can often overdo it, or "I want contrast," we're saying we want it to look like Kodak Tri-X. >> Oh, it is really beautiful. >> And you can get a sense of it. It's not quite just creating something black and white. It's a little bit more than that. A little bit more contrast, a little bit more character to it. >> Yeah. A little bit more texture to it. >> And I think what's interesting here is there's a lot of meaning that comes with the film stocks. So not only are we getting a different style, we're also getting interesting compositions. And I think that goes to how these models were trained, because they're trained on data that likely has descriptions or metadata around the images, and they're trying to create a mapping between the language and the image. And so when you use photographic language, it looks at the higher quality photos in the data set. >> Well, and there's a lot of, rightfully so, anxiety in the arts and the creative professions around some of these tools. And when I hear you speak about how to get higher quality assets out of here, I think what a head start folks with a creative, photography, or arts background actually have in this kind of world, where it still is really anchored in the technical aspects of the media. If you know photography terms, then you can actually prompt it, and I've seen a lot of f-stop terminology in prompts, all those sorts of things, significantly better than someone off the street who's like, I know how to write code, but I don't understand what Kodak film stocks are out there. And so for folks in the arts, I hope you can look at some of these opportunities and see where you actually have a leap ahead of folks, and bringing something like this together with your own creative vision, with your own photography or art, I think is going to be a really interesting way people build even more amazing things in the future. So I think it's awesome, and I want to see more people from the arts actually in here. >> Absolutely. This lowers the barrier. And there are good things about that and bad things about that. Yeah. >> But I think the more people in the world that feel empowered to create, the better we'll all feel. >> And not everybody can afford these fancy cameras either. >> No. >> You know. Oh, amazing. Okay. So you have a person now that we're generating. >> So I'm generating a person: a young man with brown hair and eyes at golden hour. And I've added in some of that camera metadata that you were talking about. So Leica, that's like an $8,000 camera, right? So that's not accessible to most people. But by mentioning it here, it puts the image generation model into the space that it's learned from around those cameras, which makes for more beautiful and more aesthetic images. 50mm is a very common focal length for portraits. f/1.2 says I want a really blurry background, like an incredibly blurry background, so it's kind of ethereal looking. And then Fujifilm Provia is a good portrait film stock that people use. And so here we've got a great image that embodies that.
Uh and we can go through and see the other ones. And all of these have sort of an aesthetic quality that's sometimes hard to get out of AI. They're not they're not in that uncanny valley that we often see with um image images that are generated of people. And I can kind of see show you, you know, if we actually generate an image, but we don't include any of that camera information, sometimes the results are more in that uncanny valley. >> Now, what I do have to call out here is all of these generated images have a quite mournful aesthetic to them. There's rain. These men are looking very concerned through these windows. And so I'm going to challenge you after we get off the podcast. I want you to email me a happy young man. >> Okay. The bright. >> We will do that. >> Okay. Oh, so you did do a portrait of a young man with brown hair and eyes. So same original subject prompt without all the location, lighting, film, camera, metadata. and we got like sketches. >> Yeah, we got sketches and that's fine because we didn't tell the image generation model that we wanted a photo. Um, one of them is a photo, but you could see this photo kind of has that uncanny look to it. It's a little bit >> too perfect. Um, and that's because it's trying to use the average of all of this training data >> rather than these photos which are saying, "Okay, I want to actually be in the um realm of training data around high-end photography." Okay, you have given us such a great way to structure our prompts. I have given everybody homework to study the arts um to make your yourself better at these AI tools. Let's do a couple lightning round questions and get back get you back to your very uh farre work of generating JSON to do AI prototypes to AI midjourney photography prompting. So um you know as I said enterprise girl B2B square boxes and forms what I love about what you're showing us is there is a lot of work in consumer that can be really accelerated by AI sort of two questions on this point. One is what do you think AI PMs and product teams in consumer products really need how how are the skills that they need to develop different than maybe ones that are working in BDB? What do you think of the opportunities are for consumer product teams with AI? >> I think AI for consumer is incredibly exciting and there's a whole lot of consumers that are using the big tools like claude and chat GBT. I think one of the nice things about B2B is the ROI of AI is usually pretty clear like if we can accelerate a workflow um we can make someone faster, we can make someone more capable. The reason why is is very clear and so businesses are adopting this stuff very quickly. For consumers, it's not always really clear what the consumer value proposition is and what problem you're solving for them. And not every problem is worth solving for consumers. So just because you can do it with the technology doesn't mean that people want to actually do it. And so I think really good consumer AI is grounded in an understanding of consumer psychology and consumer needs and then maps in well how does AI fit what that psychology and those needs rather than starting from a technology first solution and saying okay you know we could do all these really cool things let's create a consumer app around that um and hope that we're solving a need and so there's a little bit of magic I think that has to happen with consumer and the way I think to do risk that is by focusing on those needs and really understanding the underlying psy psychology. 
>> Well, and I would just say, as you were saying that, some of the psychological needs that I think are underserved, simply by the limitations of technology, time, and space on teams, are extreme levels of delight: how can you create really rich, engaging, delightful experiences, those beautiful parts of the app that tend to get shaved off in scope reduction exercises. I think that's a real opportunity. And then the other thing is making products feel really personalized, either to the place you're at, the people that you're with, or what we know about you. And so it doesn't have to look like a chatbot, but if you can think, what could I do today that I couldn't do yesterday for this user, I think there are a lot of answers where AI really unlocks your ability to deliver something very special, even if it looks like a tag or a comment or a photo. And so it's what is the tool behind the scenes versus what is the expression of the product; I think you can differentiate a little bit there. >> And I think that delight piece is so important. A lot of times as PMs, we prioritize things as must have, nice to have, won't do. And I used to tell my teams, if we cut all our nice-to-haves, our product is not going to be nice to have. And we have to reserve some of our time for the delightful things that make the product stand out. >> Yep. I love it. Okay. And then my last question is, other than giving it reference locations where you fantasize yourself to be, when AI is not doing what you want, what is your prompting strategy? How do you get it back on track? Do you have any tricks? >> I try to be very encouraging, and I've been using the word elite a lot. So: you are an elite sales coach, or you are an elite photographer. You just elevate its expectations of itself, and sometimes that will help it generate better results. And I think what's happening is that a lot of prompting is about what space of the training data set you want to be in to get a result. When you use encouraging words, it's not that you're actually encouraging the AI; it's that those words are associated with really high quality output, and it puts it in a different part of the training space. >> Okay, I love it. And again, I think this is the prompting strategy of choice of parents, who are always telling their kids, you can do it, I know you're a capable kid, I know you can do your homework. Okay. Well, where can we find you and how can we be helpful to you? >> Yeah. So I've got a Substack, Ravi on Product, and you can find me at ravi-mehta.com. You can also find me on LinkedIn, so please follow me. I've got a class that Brian Balfour and I launched with Reforge. It's on AI strategy. The question we were answering with that class, which is really important for us as product builders, is not only how do you understand the technology and how do you integrate it into your product, but what does this mean for you competitively? What do you need to do to create a product that's going to win in the market, in what I think is the most intense environment we've seen in the history of tech? We launched that in April and had a really great first cohort, and we're launching the next cohort in October. So check that out if you're interested in learning more about AI strategy; it's available through Reforge. >> Awesome. Well, thank you so much for showing us all your amazing workflows. They're very useful.
>> Awesome. Thank you so much for having me. This has been a really fun. >> Thanks so much for watching. If you enjoyed this show, please like and subscribe here on YouTube, or even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app. Please consider leaving us a rating and review, which will help others find the show. You can see all our episodes and learn more about the show at howiipipod.com. See you next time.

Summary

Ravi Mehta demonstrates a data-driven prototyping approach that prioritizes structured JSON data models over traditional design or specification prompts, enabling higher-quality, more accurate AI-generated prototypes for consumer products.

Key Points

  • Traditional AI prototyping often fails because it asks the model to handle too many tasks at once, resulting in average quality outputs.
  • The core insight is to separate UI design from data modeling by first generating a structured JSON data schema for the desired experience.
  • Using LLMs like Claude to generate detailed JSON data models ensures the prototype has authentic, realistic data from the start.
  • Integrating external tools like the Unsplash MCP allows AI to fetch real, high-quality media assets instead of hallucinating broken links.
  • Once a robust data model exists, it can be used as a foundation to generate a much higher quality UI prototype with better context.
  • This approach enables rapid iteration by simply changing the data model (e.g., swapping Paris for Thailand) and regenerating the UI.
  • The technique also allows for testing edge cases by populating data with real-world variations like long profile names or poor-quality images.
  • For media generation, using structured prompts with subject, setting, and style (e.g., film stock like Fuji C200) yields more usable, high-quality images.
  • Understanding photographic language (e.g., lighting, camera, film stock) is crucial for effectively prompting image generation tools like Midjourney.
  • This data-first approach is especially powerful for consumer products where media and authentic user data significantly impact perceived quality.

Key Takeaways

  • Start your AI prototyping by defining a structured JSON data model first, rather than a design or feature description.
  • Use an LLM like Claude to generate a detailed data schema and sample data, describing the key fields in natural language; no formal relationships or foreign keys are needed.
  • Integrate external tools like the Unsplash MCP to fetch real media assets and avoid hallucinated URLs.
  • Paste the generated JSON data into your prototyping tool to create a high-fidelity prototype with authentic context.
  • Leverage the data model to easily test different scenarios by simply swapping in new data, enabling rapid iteration.

Primary Category

AI Tools & Frameworks

Secondary Categories

AI Engineering, Machine Learning, AI Business & Strategy

Topics

AI prototyping, JSON data models, Midjourney prompting, Unsplash MCP, Claude, Reforge Build, vibe prototyping, data-driven prototyping, structured prompting, film stock, AI agents, real data, media generation

Entities

people
Ravi Mehta, Claire Vo
organizations
Tinder, Facebook, TripAdvisor, Xbox, Reforge, Google DeepMind, Gemini, Claude, Midjourney, Unsplash, Persona

Sentiment

0.85 (Positive)

Content Type

interview

Difficulty

intermediate

Tone

educational, instructional, entertaining, technical, professional