“Vibe analysis”: How Faire uses Cursor, enterprise search, and custom agents to analyze data

howiaipodcast · KOr-xQuNK4A · Watch on YouTube · Published November 02, 2025
Duration: 1:03:29 · Views: 9,535 · Likes: 180

13,024 words · Language: English · Auto-generated transcript

How do we start at the very beginning of analyzing a product and its quality and its usage through analyzing conversion rates? >> The new AI tools have just absolutely transformed the process of getting all that context. You can go as broad as you like, self-serve into an unfamiliar topic incredibly quickly. And that means you can not only deliver quicker analysis, you can deliver much better analysis too. I'm going to start just by doing an enterprise AI search. So I'm going to start very simply by asking Notion: what experiments or new features launched between September and December 2024 that could have added friction to the checkout process for new retailers in Europe or North America? And I've just said focus on XP docs, PRDs, and launch announcements. Straight away I've got a really interesting list of hypotheses to dig into with no work. You can see it searched across Slack, Notion, Jira, and everything else very, very quickly. So Alexa, how do we do actual analysis of data when we've identified a problem or an opportunity we want to go after? >> Without AI, especially the context gathering would mean hours spent digging through all the specs and PRDs, writing SQL queries from scratch, and then spending a lot of time writing and editing a doc. Using Cursor to actually create, edit, and write SQL has been pretty game-changing. Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today I have a great episode with Tim and Alexa from the data team at Faire. They're going to show us how you can use Cursor, MCPs, ChatGPT, and even write your own agents to do data analysis. We're going to see everything from decomposing that scary question, "What went wrong in September?", to doing detailed funnel analysis on experiments and surveys. Let's get to it. AI is supposed to make work easier, but I've been there: weeks of setup, endless back and forth with engineering, and yet another tool the team never really adopts. That's why I use Zapier's AI orchestration platform. It connects with nearly 8,000 apps, so I can finally put AI to work without the drama, without the delays, and without pulling engineering in every time I want to automate something. With Zapier, you can roll out AI-powered workflows that do real work across your whole company in days, not weeks. I use Zapier every single day. It automatically responds to leads with enriched, personalized data. It checks my calendar weekly and offers smarter ways to manage my time. And it even drafts emails for every new request that lands in my inbox. All of that running quietly in the background so I can focus on the work that matters. And Zapier's built for scale, with enterprise-grade security, compliance, and governance. It's trusted by teams at Dropbox, Airbnb, OpenAI, and thousands more. Go to try.zapier.com/howi to learn more about how Zapier can bring the power of AI orchestration to your entire org. Alexa, Tim, thank you for joining How I AI. >> Well, great to be here. Thanks for having us. >> Thank you so much. >> One of the things that we can do now, which I am probably personally causing in the internet world, is we can just build a lot of product. I was thinking the other day, I'm going to tweet something where I tell PMs that they should just spend a month saying yes instead of saying no. Like, let's ship some features.
And I think AI has really accelerated product development, software engineering, getting innovation into the hands of customers. But the problem it has created is we don't know if those products are any good. It's the perennial product problem: you can ship things and they may not make the difference you hoped they would. So I'm really excited about this conversation, because you are going to show us how to use AI, and even some of these tools that software engineers or product managers might be familiar with, to do really deep, meaningful product analysis. I spent a lot of time in experimentation, so I love a good conversion rate optimization. And so Tim, we're going to kick it to you to start with. How do we start at the very beginning of analyzing a product and its quality and its usage through analyzing conversion rates? >> Yeah, I love this. I think everyone's talking about vibe coding, but no one's really talking about vibe analysis, and we're heading in that direction very quickly. So let's get into it. Before we do anything too technical, we want to share a really broad range of examples here, from the really complicated to the actually incredibly simple. I think everyone knows PMs are going to have to become engineers, and all of you are going to have to become analysts as well. So I think there's a lot we can show here. We want to start off with a really simple use case that should be familiar to everyone listening, but I think it illustrates the point: it's often the most simple AI tools that can actually have the biggest impact here. Before we get into the actual demo, I think it's useful just to pause for a second on the question of what analytics actually is. Once you break that down, you get a much clearer view of where these current tools can be most valuable. Most people jump straight to the nuts and bolts of actually manipulating and crunching data. But that's really just a small part of the overall process. The most important, often most difficult thing is actually just getting the right context in the first place, because that's what separates good analysis from bad. You need to know to ask the right questions, to come up with the right hypotheses, to know what analysis is even worth doing in the first place. You need to know where the data lives, and you need to be able to interpret it all very well. And the new AI tools have just absolutely transformed the process of getting all that context. You can go as broad as you like, self-serve into an unfamiliar topic incredibly quickly, and that means you can not only deliver quicker analysis, you can deliver much better analysis too. So to illustrate the point, I want to talk through what I'm guessing is a very familiar situation, where a business metric suddenly drops off a cliff and no one's got a clue what to do with it. I'm going to use a real example from Faire for this. This happened to our new-customer conversion funnel at the end of last year. If you've ever worked in growth, you'll know new customers are just extremely sensitive to even the tiniest little friction. So almost anything anyone does in the business, anywhere, can affect these kinds of things.
Whether it's a signup flow, a search algorithm, a shipping policy, this all can affect these things. And if you're not careful, you're going to have to decomp the entire business. So let me show you how these things can be done so much quicker. Imagine this problem lands on my desk. I might look at a couple of existing dashboards to say, what's going on here? And you can see very quickly the issues started in September and there was another drop in December, and it seems to be concentrated in the checkout stage. But beyond that, I've really got no idea what could have actually caused that. So let's start really broad. I'm just going to share my screen. I'm going to start just by doing an enterprise AI search. Now, we use Notion, but frankly, every document system now is going to have an AI system. If they haven't got one yet, it's coming. And they are just game changers. So I'm going to start very simply by asking Notion what happened. The only thing I'm going to do, just to make this more realistic, is filter the date range. I don't want it cheating and looking at the answer; it's only going to have access to the things I had access to when I actually did this. So I'm going to cap it at the end of April last year, and then run it. All I've asked is: what experiments or new features launched between September and December 2024 that could have added friction to the checkout process for new retailers in Europe or North America? And I've just said focus on XP docs, PRDs, and launch announcements. If you think about what I'd have to do in the past, I'd be crawling through a million documents, doing a load of searches, going through a ton of different Slack channels trying to work out what's going on. And instead, look: straight away I've got a really interesting list of hypotheses to dig into with no work. And you can see it searched across Slack, Notion, Jira, and everything else very, very quickly. Let's just pull out a couple of these. So what's happening? Clearly we launched some kind of checkout experiment around this time; that's definitely worth looking into. We've done something with a checkout blocker in Europe. Okay, lots of interesting things to dig into. Now, with a couple of clicks, I've got a good long list, but I don't really know what these things are. I've got all the links to the underlying documents I could go click into, but as a starting point, let's just pick one of them and ask: what is EORI? So we'll just ask that. It's going to run another little search, and it's going to start bringing up a little bit more information, a bit more detail on this thing. So let's see where that goes. Okay, so very quickly it's giving me the term and what it is. And you can see: it's a regulation in Europe, and someone's done something to start asking for more details, clearly trying to improve checkout and conversion rates, and they're trying to bring that one in. But I think this is a great starting point.
I've got some detail, but what's really interesting here is everyone knows a PRD is one part of the story, and between a PRD being written and something going into the codebase, a lot can happen. So to actually understand what's going on, you usually need to go one layer deeper, into the actual technical implementation. And I want to show you a quick trick for how I do that. I think one of the best things about these AI tools is the ability of someone who's non-technical to access things they couldn't previously access. And a great example of that is just being able to talk to the product codebase. I'm not an engineer. I can't write Kotlin or Swift. I used to be a lawyer, for God's sake. Instead, I can run a deep research against our codebase to find out exactly what got implemented for this particular feature, and when. Now, I'm going to do this in two different ways. I'm going to do it in ChatGPT, which I think is very simple and anyone can replicate incredibly quickly; everyone's familiar with it. And I'm going to do it in Cursor, which is a bit more specialized but just incredibly powerful. So I'm going to open up a new chat, put it into deep research mode, and make sure my GitHub is connected. All you do, and it's not technical, is say yes a few times to get your GitHub connected. The only reason you do it in deep research is because it's the only way you can actually access it. It's going to search our codebase now in exactly the same way it would normally search the web in a deep research. So I'm just going to put in a prompt; let's copy that in. Now let me talk a little bit about what this prompt is doing. I've given it a role. I've said: you're a senior staff engineer, you've got expertise in all these different codebases, Kotlin, Swift, TypeScript, and you're working at Faire. And I've given it a task: please conduct a forensic investigation of the codebase to produce a comprehensive, time-sequenced report on all changes to the EORI collection process at checkout between June 2024 and February 2025, just making sure we don't miss anything. The rest is just a bit of detail as to what I want this to look like. I've said I want an exec summary. I want a table with all the different PRs and commits and what they went into, and I really want it to focus in on the actual impact these commits had on the retailer experience; explain it to me in layman's terms. And then I've just put a few requirements in here to give it a bit more context: be precise, use simple, clear language, only use GitHub sources. >> I want to call out here, you're using this prompt in the context of what I would call a business incident, right? New user signups just dropped. But this is a prompt that I want the engineers watching or listening to the podcast to really pay attention to, because if you're in the middle of a SEV-1 incident and you need to trace who did what, I know so many engineering teams are either manually looking through code or looking at these specialized codegen tools to do this, but probably aren't reaching for something like ChatGPT deep research to just go ahead and do it for you. And if you're a product manager looking to be helpful during an incident, this is maybe a task you can take on on behalf of your engineering team, just to provide some additional context in the background. >> 100%.
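For reference, the prompt Tim describes has roughly this shape. This is a reconstruction from his walkthrough above, not the verbatim prompt, and you would adapt the feature, dates, and languages to your own codebase:

```
Role: You are a senior staff engineer at Faire with deep expertise in our
Kotlin, Swift, and TypeScript codebases.

Task: Conduct a forensic investigation of the codebase and produce a
comprehensive, time-sequenced report on all changes to the EORI collection
process at checkout between June 2024 and February 2025.

Output:
- An executive summary.
- A table of every relevant PR and commit: link, name, a summary of what
  it changed, who was affected, and the impact on the retailer experience
  in layman's terms.

Requirements: Be precise. Use simple, clear language. Only use GitHub
sources.
```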
I think this is great for engineers. I think it's great for getting people to talk better to engineers. There's just so much you can do here. So, as always, deep research is asking a few clarifying questions. We'll just answer a few of those to get it going: use discretion, and yes, please. >> You prompt just like I do. I just say: you pick, you decide, you go. I don't care. >> I think the fact that the product asks you these questions makes me think it's more to make you feel like it's doing something than anything else. So that's going to take a bit of time. While that's running, I want to show you how to do this in Cursor, because Cursor is one of those tools that everyone thinks of for vibe coders. They think of it for engineers. They're not really thinking about what else it can do. And for both analysts and non-analysts alike, it's an incredible tool. More and more people are talking about the phrase "context engineering" rather than prompt engineering. I love that; it actually explains what we're trying to do here. And for me, Cursor is the ultimate context engine. You can hook it up to MCPs, so basically I can hook it up to every single system in our business to get all the data I need. And that makes it such an incredibly good accelerator for getting context and doing analysis. I actually find, increasingly, that this gets better results than deep research in ChatGPT. Both are good, both are game changers, but I think this is just a little bit quicker and better. So I'm just going to make sure my MCPs are all hooked up, and then all I'm going to do is drop exactly the same prompt into Cursor, and we'll see the two running. So, exactly the same prompt. Just for context, we haven't even got off to the races at all on the ChatGPT side. And straight away in Cursor, we're going: it's got a nice to-do list, it's saying it's going to search all the right things in GitHub, and it's then going to forensically analyze it. We'll just let this run for a little bit. You can see it's already starting to pull in the code and the pull requests. >> One of the things that I think is interesting to call out: I've run a lot of product, engineering, and data orgs before. In engineering, certainly, day one, what are you doing? You're getting access to all the repos, you're getting set up with GitHub, you're pulling your local environment together. I know that data teams often have a similar onboarding because they're working so closely with production data. One of the things I think is going to change, or if it hasn't already should change right now, is that product manager and designer onboarding in the first seven days has to include access, at least read access, to GitHub, getting your local repository pulled down, getting all your MCPs set up, because code has now become a data source for anybody doing work, not just people writing code.
So I look at this and I think leaders out there need to pay attention and rethink their onboarding process, because you don't want to be in a situation like this and go, can somebody give me GitHub access? >> It goes even beyond that. Everyone should have access to every system, and it should be from day one. These tools are just the best onboarding accelerator. We've seen it for analysts, we've seen it for engineers: suddenly people get the context very quickly. Okay, so we're already off. It's summarized everything and it's actually starting to write things out here. So straight away you can see I've got a nice exec summary. It's given a few things, but this is what I was most interested in. Okay, so I'm getting a table here. For those that can't see my screen, I'm getting a table with every single PR that affected this part of the flow. Look, it starts in July 2024 and it's still going; it'll probably go to somewhere like December or February, depending where it's going to go with all of these things. Now, let's just call out what this is doing. It's giving me an exact link to the specific PR that actually pushed this into the codebase. It's giving me the name of it, and it's giving me a summary of what it did. It's saying who was affected, and it's saying what the impact was on the retailer experience. If anyone's done this kind of thing, it is so difficult to actually pick through all the code and understand what's going on, and this can now be done incredibly quickly. And so, very quickly, knowing nothing about this feature, I can already start to get really smart on what happened. And I can see, if I dive down here, there was an experiment launched in mid-September, right in the sweet spot of when this drop first happened. And if I scroll through to December, you can see it launched: the treatment went live to all users. So this now looks like a really interesting, potentially smoking gun that we can dig into. And so instead of spending days talking to people about all the potential hypotheses, I can now speak to exactly the right colleagues and have a really targeted, informed conversation right from the off, to crunch through this problem in a matter of hours rather than weeks. So before we've even done any data crunching, this can be absolutely game-changing for us. >> Yeah. And it allows you to go a lot deeper than I've been able to go historically on these kinds of analyses. When you're running these high-velocity experimentation programs, you have so many concurrent experiments. You have experiments colliding with rollouts, colliding with just plain launches. And just trying to decompose what the state of your app was on any single day is really challenging. And even if you can do the manual research to get this at a feature level, like, yeah, today we launched the one-page checkout, I think the real challenge is: well, did we implement it well? Is there anything in there that we should worry about? Did we exclude any users from that? And so I do think the ability to use code as a detailed source of truth when doing these kinds of forensic analyses really makes the difference in figuring out what's going on with your business. >> And then getting smart enough to go one level deeper as well.
You can ask follow-up questions: how did it differ for different segments? Are there other ones I'd be interested in? You can get so much detail just by asking questions on these kinds of things, without speaking to any engineers. >> And this gives me some inspiration on other use cases for querying your codebase and GitHub history for events. One of the things that I do very frequently is a very similar analysis to this, but I say: what is everything that shipped in the last week, from the context of a customer? And then I use it to write my newsletter. So again, I'm starting to use our codebase as a source of truth for our marketing materials. I don't have to proxy through what was in the PRD or what a PM wrote or any of that stuff. I'm just like, tell me what was in the code commits, because that's what I know went live. It can interpret what the customer-facing experience and intention would be, and then you can create these really interesting business- and market-facing assets out of that. So I just think the ability to query your codebase and your GitHub history, for any use case including this one, is really useful. >> Yeah, I love that. >> Great. Now, what do we do after this? You've identified you have a conversion rate problem. You've identified maybe a couple of sources of the issue. You're going to go talk to your colleagues, you're going to look at the code. How do we actually do some analysis? I know we said we were going to do some vibe analysis, and we have seen very few numbers. So, Alexa, how do we do actual analysis of data when we've identified a problem or an opportunity we want to go after? >> Yeah. So this is a quite classic analytics task. I'm going to take us through: we launched a new product feature and we actually want to understand how it did. I'll take us end to end, from understanding how the feature was built, to analyzing its performance, to producing a summary that could eventually go to our exec team. Like Tim touched on, without AI, especially the context gathering would mean hours spent digging through all the specs and PRDs, writing SQL queries from scratch, and then spending a lot of time writing and editing a doc. With AI, I can pull context, similar to what Tim just did, directly from the codebase. I can generate queries, and I can draft a synthesized doc. So I am going to start sharing my screen. >> And while you pull that up, I have to say, people think the reason I got into AI in a deep way was because I thought it was so fun to code. Actually, it was that it made my SQL so much less ugly than it used to be. That was my number one use case however many years ago. I was like, thank God, now I don't have to bother my colleague with my disgusting SQL. I can bother AI with my horrifying SQL, and it can make it a little bit more efficient. >> Yeah. I mean, even just ChatGPT over the last couple of months has been a game changer for SQL queries. The problem with ChatGPT is you had to spend a good amount of time giving it context, like the exact table names and the exact field names. So using Cursor, which is what I'm going to show today, to actually create, edit, and write SQL has been pretty game-changing, especially because it's so context-aware, and I will talk about that. Now, Cursor can take three to four minutes to run some queries.
So I'm going to just kick off this prompt, and then I'll explain the context and what I have done. While that's running, I will set the stage. Last month, in July, we redesigned the signup flow for a new payment method that we have been piloting. This signup process is successful when a customer links their bank account for the payments. Our old flow had been live for a few months, and we had a hypothesis that we could improve it, so we redesigned the flow. Because this is a pilot, we actually didn't have enough retailers, or users, to run an A/B test. So I just needed to do a pretty straightforward: how was this performing before, how is it performing after? Historically, again, that would have meant a lot of digging through documentation, or more realistically just pinging an engineer to ask questions like: okay, what did we build? Who sees it, and why? What front-end events are emitted that I can use to analyze this? And while I do work closely with our engineers during the eng-spec phase to figure this out, those details are easy to lose track of, especially since we're often coming back to analyze things weeks or even months after the feature launched. I will say that I probably would start with Notion AI context building, similar to Tim, but we already showed that, so I'm skipping straight to the codebase. And if we go up to this prompt: my prompts are way less pretty than Tim's. I don't spend a lot of time on them; I feel like with Cursor, you can always iterate. I wanted to understand the setup wizard, which is what we called this new flow. I told it to research our codebase, and I essentially asked who, what, where, when, why. And if we go to this answer, we can see, okay, it is looking into the codebase. I'm not an engineer, and I don't really know what all of this means, but we called this in our code the first-run user experience. It tells me about some flags, that users can't be sub-users; there's a lot of detail here. And it's telling me when users see this flow, what happens during the flow, and the order of steps that happen. That's pretty important: if I'm going to analyze a funnel, I need to know in what order things happen, and whether there is a success event, like when the setup is complete. And then it gives me a bunch of events that I can use to analyze it. So this is already such a game changer. In the past I would have leaned on secondhand sources like Notion to piece together how it was built. With Cursor, like you were saying, I can go straight to the source and have it translated into natural language, and that gives me a lot more confidence, because it reflects what's actually live, not what someone remembered to write down. >> One thing I want to call out while you're going to your next step: one of the steps that I see skipped by engineering teams is good event tracking when they release a feature. You start up front in the PRD and you define a tracking plan, and then it gets to implementation and people forget: should it be a front-end event, should it be a back-end event? And one of my favorite follow-up AI tasks, after something has been released or while it's in code review, is a quick prompt where I go: is everything appropriately tracked in this feature? And I get either Cursor or Devin to go in and put in all the right events and make sure the schemas are normalized.
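A sketch of the kind of tracking-audit prompt Claire describes might look like the following. The wording and the flow name are illustrative, not her actual prompt:

```
Review this PR against the tracking plan in the PRD. Is every step of the
new signup flow appropriately tracked? For each user-facing step, list the
front-end or back-end event that covers it and flag any gaps. For each gap,
propose an event name and schema consistent with our existing event
conventions, and add the missing instrumentation.
```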
So for all the data analysts out there: be annoying and do a PR for your own events on new features, so you're not stuck with what the engineers built for you. >> That inspires me: I can take the eng spec and just put it into any AI tool and say, what events do I need to ask for to be able to measure the success of this effectively? Because right now I'm just doing that in my head. >> Yeah. Don't do it in your head. That's the subtitle of How I AI: don't do it in your head. >> So with this next prompt, again not the most sophisticated prompt, I'm just saying I want to understand at a high level how this feature has been performing, and I give the quick context that our goal was to make it better. That's pretty obvious, but I just want to spell it out. And, like Tim, I'm giving a fair amount of discretion to the Cursor agent. I'm saying, okay, come up with the ideal output fields; I have some ideas, but it's up to you. Two, I do find that you have to tell it explicitly to create a file; it sometimes forgets to do that and just writes the SQL directly in the conversation sidebar. Then, use the MCP connection (I went through all this trouble to set it up; I want it to use the Snowflake MCP connection), and then actually QA the file. And that's what's so powerful about the Cursor agent and the Snowflake MCP: not only is it writing the SQL, which is what ChatGPT has been doing for me for the last year, it is running it, looking at the output, and then making its own sniff-test, sense-check decisions, which is just so cool. Okay. And another thing I want to call out as we are running this: the reason I have a fair amount of confidence that this is going to work relatively quickly is because I, and our data team, have done a fair amount of work to create what's called a semantic layer. First, our amazing data engineering team, about six months ago, decided to create a general company semantic layer. A semantic layer is essentially just a translation for an LLM of our business terms, tables, fields, filters, metrics, etc. AI can look at those files and understand what our tables mean. This general one covered our most-used generic tables: orders, items, users, etc. They connected it to a custom GPT, and anyone in the company can go ask pretty basic questions, like what was the average order size in Europe last year, and get an answer really quickly. That's been a huge unlock in saving our analytics team time: we're not answering these questions for people, they can self-serve. It's democratizing data and saving us a lot of time, so that we can focus on deeper analysis. And for deeper analysis, we needed something more than just these basic tables. So, with a lot of help from one of our data engineers, who built it, we created a specialized semantic layer just for my scope, as a test. I was the first one in the company to do this, but we're planning on rolling it out to all areas of scope. Basically, this semantic layer defines the tables that I use the most, the joins, the filters, the metrics. And because it lives in our codebase, in our data science repo, Cursor can just tap into it, and it makes the zero-shot ability of running SQL insane.
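To make that concrete, a minimal sketch of what an entry in such a semantic-layer file might contain is below. The structure, table names, and field names here are hypothetical; Faire's actual file will differ:

```json
{
  "tables": {
    "orders": {
      "description": "One row per order placed by a retailer.",
      "fields": {
        "order_id": "Unique identifier for the order.",
        "retailer_id": "The retailer who placed it; joins to users.user_id.",
        "gmv_usd": "Order value in US dollars, before refunds."
      },
      "default_filters": ["is_test_order = FALSE"]
    }
  },
  "metrics": {
    "average_order_value": {
      "question": "What is the average order size?",
      "definition": "SUM(gmv_usd) / COUNT(DISTINCT order_id)",
      "table": "orders"
    }
  }
}
```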
I've seen a couple of these, and I don't know what yours looks like, but they really are just defined terms: tables, this table means this, this field means that. If you're trying to query average order value, this is how you do it. It's almost your documentation, in a slightly more structured form, around common queries. And what I think is nice about this is its ability to be managed as code. You can change it, you can update it, you can add new things. I also think, for the data engineers out there, it reduces a little bit of the needed complexity in the data warehouse setup, because previously you were creating these aggregate tables and these defined metrics and hoping people were writing queries the right way. Now you can define these canonical queries and know that no matter what your tables look like, they're going to get to the right answer, which I think is quite nice on the data engineering side. >> Yeah. So this is an example of what you were talking about: it's just a very structured JSON file. And from what I understand (I did not do this, but I had the engineer explain the process to me), LLMs helped a lot with creating it. He fed in details about our data warehouse and a million queries that I had previously written, and it helped spit out this type of thing. He also used LangChain to change the names of a bunch of the reports that we had into question form, because obviously when I'm querying this, whether through a custom GPT or Cursor, I'm often asking a question. I thought that was pretty cool: translating it to a question makes the semantic layer work so much better. >> Oh, this is going to be my next project. This is so fun. >> Oh, amazing, glad to inspire. So, to go back to the actual SQL that was run, I will actually just run this. >> And just in case people missed it, you did call out the Snowflake MCP, which is what we're seeing right now: a programmatic way to hook into running queries in your Snowflake data warehouse. So you can not only generate the SQL here, but instead of copying and pasting it and going to Snowflake's cloud UI, or whatever your visualization tool is, and running it there, you can just run it right here. You're getting your tables right here. So you're eliminating that context switching, you're eliminating the copy and paste, and you're getting your data right here. >> Yep, exactly. And so... oh, this is interesting. I'm looking at this and I think it showed a mistake. But, you know, I asked it to QA itself, and normally it does a very good job. One of the quick QAs that I do for something like this is I want to see no skipped steps. Oh, actually, you know what, I remember from the context: this is a step that only some people see. But usually when I'm looking through this (if we were not doing this demo, I would probably spend a lot longer QAing it), I just want to see drop-off that makes sense, right?
I don't want to see zero, zero, and then one, or then zero. So that's just a quick QA that I can do. It's not the AI's name on this analysis, it's mine, so I do that. The other thing that I have done to make sure that I can QA this effectively: in my Cursor rules, I tell it to comment every single CTE (and sorry, CTEs are sections of SQL that often get created when you're writing SQL), so that I know each step of what is happening. As I'm looking at the SQL, I can say: okay, the agent said it's doing this, and looking at this code I can actually tell that it's doing this. >> So engineers, cover your ears, because engineers hate, hate, hate when I say this. They hate it. I love over-commented AI code, and let me tell you why. When you are not writing this code, you really need to understand the thought process behind how the code was designed, and having AI comment the code that it writes gives you a natural-language way to check whether your understanding of the implementation matches the actual technical implementation, based on the AI's own reasoning. Fine, delete it if you want to; I don't care. I know all the arguments against over-commented code, and I think there are a lot of benefits for human review. It's also great context for AI when it goes back and works on it. So engineers, you can now uncover your ears. You can yell at me on Twitter, or X, if you want to. But I do the same thing, where I say: go ahead and comment the code so I can understand how you've decomposed this step by step. >> Yeah, it's pretty awesome. I even have a custom GPT in ChatGPT to comment code I've written before; I just insert the code. If I'm ever handing off dashboards to someone, I really don't want anyone to be so confused that they have to bother me. My goal is to have it be quite self-serve. >> Look, those lines of code are not going to explain themselves. Let's get some comments. This episode is brought to you by Brex. If you're listening to this show, you already know AI is changing how we work in real, practical ways. Brex is bringing that same power to finance. Brex is the intelligent finance platform built for founders. With autonomous agents running in the background, your finance stack basically runs itself. Cards are issued, expenses are filed, and fraud is stopped in real time without you having to think about it. Add Brex's banking solution, with a high-yield treasury account, and you've got a system that helps you spend smarter, move faster, and scale with confidence. One in three startups in the US already runs on Brex. You can too, at brex.com/howiai. >> So I'm going to kick off my next prompt. Basically, we're going to skip ahead a couple of hours here, because up until this point my goal was to get a clean base query that I could use for dashboards in Mode, which is Faire's BI tool. A lot of what we are doing as the strategy and analytics team is creating tables that can then be used for pretty charts to tell a story. So let's pretend that I spent a few hours with Cursor refining queries. I actually did do this, one for the old flow and one for the new flow; this is also a real use case, like Tim's. And then I built some visualizations in Mode. What's really cool is that there is actually a Mode MCP, and I can tell it to view a dashboard directly.
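As an illustration of that commented-CTE convention, a funnel query in this style might look like the sketch below. The table and event names are invented for the example; the point is the comment on every CTE:

```sql
-- Funnel for the redesigned payment-method signup flow ("setup wizard").
WITH flow_starts AS (
    -- Everyone who saw the first step of the wizard since launch.
    SELECT user_id, MIN(event_time) AS started_at
    FROM frontend_events
    WHERE event_name = 'setup_wizard_viewed'
      AND event_time >= '2025-07-01'
    GROUP BY user_id
),
bank_linked AS (
    -- The success event: the user linked a bank account.
    SELECT DISTINCT user_id
    FROM frontend_events
    WHERE event_name = 'bank_account_linked'
)
-- Overall conversion: wizard starts vs. successful bank links.
SELECT
    COUNT(*) AS started,
    COUNT(b.user_id) AS completed,
    COUNT(b.user_id) / COUNT(*) AS conversion_rate
FROM flow_starts AS f
LEFT JOIN bank_linked AS b
  ON f.user_id = b.user_id;
```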
For those who are listening: we have on the left-hand side our legacy flow, and on the right-hand side our new flow. You'll see that there's one step that is only present for some of the entry points, so there's a split by entry point, and basically it's just showing what the overall success rate is and the success rate by step for each of these flows. That is what I have pointed the Mode MCP towards in this prompt. So if we go back to this prompt, I'm just going to tell it to run this tool. I'm telling it, again: hey, go look at this Mode dashboard, and use this MCP. I also give it the direct SQL that I wrote with Cursor that's powering that dashboard. I'm just asking it for some detailed takeaways and next steps. I give it a little bit of context, and I tell it to ask clarifying questions and use the MCPs if necessary. MCPs (I'm not sure if we've defined it yet, but it stands for Model Context Protocol) are so powerful. I think that's when this has felt like magic the most. At first I assumed that they were similar to APIs, where everything needs to be defined, like some engineer on both sides needs to go define endpoints with a very specific structure. It seemed like a lot of work. These models just know what to do. It's just wild to me. I will say that there's a lot of work on our data engineering side to get some of these MCPs set up. Ben on our analytics platform team has spent a lot of time on this, and I don't want to minimize that step, but as the end user of them, it just feels magical every time it can access something. And so if we go into the results over here: key takeaways and next steps. Cool. So it looks like we did a good job. Yay, Faire. And it gives a pretty detailed list: the funnel analysis, insights and concerns, actionable next steps, etc. This is already a pretty good output to start with. But at the end of the day, analysis like this only matters if you can communicate it clearly, right? You need to convince people of whatever you are trying to communicate. So we also have a Notion MCP, and I'm going to ask Cursor to create a doc that captures our findings in a structured way. >> And I want to pause really quickly, because we have done this in maybe 15 minutes. You have taken a problem, a pre-and-post analysis of a feature change. You have written SQL. You have not used a WYSIWYG analytics tool; you have written straight-up good, traceable SQL to do a funnel analysis of that on a daily basis. Very interesting. You have made a dashboard for it so that your business users can use it. You have then done a meta-analysis of that dashboard, using the MCP to actually read the dashboard, do a first-pass analysis, and create a summary not only of the results but of recommended next steps. And then you are going to publish that to your business using Notion. Now, I have to say, I have worked with a lot of data teams, and most of them spend their time saying: what is the priority of this analysis? We have a backlog. I need data engineering. And, fine, here's the dashboard. It's the ones that get promoted three times in a year that go that extra step, where they're like: and here's the analysis, and here are my recommended next steps, and I made it pretty so you can share it with your boss.
And I was watching this and thinking, oh man, I'm going to promote this data analyst. They're pretty good. And so I just think about the ability to level up the quality of your work and think through the interesting things. The interesting thing isn't, did I write this SQL join correctly, right? The interesting thing is: have I thought through all the edge cases? Do I have any creative ideas on what we could do next? Can we improve this analysis for the future? And so I really like this end-to-end flow, because it shows how you are leveling up into higher strategic tasks, as opposed to spending your time in the tactics. >> Yeah, I totally agree. And we are almost done, but like you said, we need to communicate this. One thing that we have done on strategy and analytics: our chief strategy officer, Dan, really cares about synthesized writing, and all the leaders on his team care about synthesized writing. So we worked with him a couple of months ago to create some guidance on how to write at Faire. Faire is very much a vertical doc culture, a pre-read culture. We're not creating a lot of slides; we are writing a lot of docs. So we have this sort of answer-first structure, key-principles doc, and we also have a template for what docs should look like. And in this prompt you'll see I tell it to follow the rules that are in these docs. That's another thing I love about Cursor: you can just tell it what rules to follow, in a variety of ways. >> Okay, Alexa, I'm going to give you an upgrade here, which is: you should reference these files in your Cursor rules so you don't have to each time. >> That's a great point, and I should. I mean, I wanted to show the full flow, but the reason I don't is because it would have actually done it in the previous step. >> Oh, yeah. >> It would have known, and then I wouldn't have gotten to talk about it. I will do that once we are done. >> It's showbiz, folks. That's what this is. >> And so the last thing is, I am going to pull over the doc. This is one I created from a previous time I did this, just because I wanted to highlight in yellow. I gave instructions in this prompt to tell me what to add. One thing I want to get across: I don't think that Cursor, or AI, can zero-shot an executive-ready doc yet. That is where I think we still need to do three to four revs of editing, adding analysis, making sure it makes sense. These tools have so much context, but we still have some context that is just this, like, je ne sais quoi. Humans are still valuable. So this is a pretty good start, and what's cool about Cursor is I cut out some of the middlemen. I got to this point really quickly, but we're not just creating AI-slop docs all over the place. We are just accelerating how fast analysts can do things like this. And the other thing that's really helpful: I would run this through that guidance three or four times. It can be hard, when you've been so in the weeds of an analysis, to take a step back and make sure your story makes sense. That's what LLMs are really good for; they can cover my blind spots.
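Claire's suggestion, written as a rules file, could be as small as the sketch below. The file paths are hypothetical; the point is that the rule points at the writing-guidance docs so they apply automatically:

```
---
description: House style for any doc or summary written for leadership review
alwaysApply: true
---
When drafting any document or summary, follow the writing principles in
docs/writing-at-faire.md and the structure in docs/doc-template.md.
Lead with the answer, then the supporting evidence.
```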
>> Well, you know what's more painful than running this three times through your guidance? Sitting three times with your SVP of strategy and having them tell you this makes no sense and you need to go back and edit. So again, what a nicer way to get to a higher-quality output. >> Yes. It saves me time, and it saves the leaders on my team time, and hopefully improves the quality. It's fundamentally improving how we do work on the analytics team. >> And one thing I want to call out for folks that are maybe listening and not watching: Alexa, my friend here, is smiling. This is fun. This is interesting. You're not sitting here saying, "I have no role to play anymore; the machines are going to take over." You're saying, "Man, it was really boring to dig through tables and write all this SQL that I know how to write and have done a couple of times, so let's let the machines do it." And now you're able to focus on interfacing with the business and having impact. I think it's fun. Every time I get in these tools, I feel like it's magical. I feel like it's really fun. So I want to call out: we got smiles across the board here on How I AI. >> I didn't show this, but the type-ahead, when you're actually editing the SQL, is also so fun. It knows what you want to do. Yeah, this whole process is very fun. >> I think what's so powerful is this is not just making the good analyst incredible. It's also democratizing data. SQL can be written by people all over our business, whether they're in sales, design, anywhere else. So the people with the context can do analysis just like this, and then the analysts can do the really complicated stuff, where these tools help them get really into the weeds. >> For people early in their career, I've said this before and I mean it to be true: if you want to know the inflection point of Claire Vo's career, it is when she learned SQL. Truly, I became unstoppable at that point. So lowering the barrier to entry on data analysis is just going to create a whole bunch of really high-impact folks. Awesome. Okay, Alexa, so we just saw how Cursor can do end-to-end funnel analysis, all the way to the proverbial front door of your SVP of strategy. Tim, let's talk about another kind of analysis: experimentation analysis. My favorite. >> Yeah, one close to your heart. So look, we've talked about the big picture, and we've talked about a really detailed view of an actual analyst doing their day job. But one of the other things these AI tools are just so good at is accelerating process: automating away some of those routine, lower-impact steps in the analytics journey. As a good example, we want to show you a quick agent we built which automates the process of writing up experiment results. Across Faire we might be running, I don't know, hundreds of A/B tests on the product a month, and each of those experiments needs to be monitored, assessed, and documented, and that just takes up so much time for our analysts. If we don't stay on top of this, it's our team that can become the bottleneck and slow down our launch velocity, which is the last thing anyone wants. And I know this is something that's happening up and down the country, at every single tech company.
So we thought it'd be a good example to demonstrate. Let me show you how I built this. One thing I really want to stress here is just how straightforward these things are to build. Once you've gone through the pain of setting up Cursor and getting your MCPs in place, actually spinning up any new agent you can think of is just so quick, and so non-technical, for anyone to do. It all runs off a Cursor rules file. If you don't know what these are, they're literally just a type of file, an MDC file, that these agents know to look for and know is likely to contain instructions. They're really easy to set up; it's basically plain English. You just write a simple one-line entry of what it is: format for writing experiment results using Eppo data. Eppo is just the experimentation tool that we use; it basically takes our data, does a bit of analysis, slaps a UI around it, and writes it up for us. You then select when you want it to apply. I've just selected Apply Intelligently; I trust the model to work out when it needs to use it, and it does a pretty good job. And then, other than that, it literally is just writing out what you want the agent to do. Now, this might look a bit complicated, but I'll generally write out in plain text, in a few minutes, what I want it to do, then ask Cursor to tear the thing down, and I'll rewrite it a couple of times and get it into the format I want. Ultimately, it's just a step-by-step guide of what I want this thing to do. For those who are listening, I've said: if you're asked to write up experiment results, do the following things. Ask for the experiment name if you haven't already got it, and then go collect the data you're going to need. Use the Eppo MCP we've set up, so go talk to our experimentation platform and pull in the actual results of the experiment. Then use our Notion MCP, which we've already talked about, to go pull in all the other context you might need: any other documentation that's going to help it interpret that data and write up this report. And I've got a little bit down here, you can see, telling it exactly what kinds of documents to look for: PRDs, experiment docs, technical specifications. Then I ask it to write out those results in the format I give it. And I'm pretty prescriptive about the format, because I want it done really consistently, with really tight takeaways. I've actually asked it to create a local file on my computer first. That just means I can look at it before it creates the Notion doc: I can take a peek and refine the prompt if I need to. But that's just a fallback; ultimately, it's going to turn into another Notion doc so everyone else in the business can see it. And it's going to do all this incredibly quickly. Let's actually see what this thing looks like in reality, and just run it on an experiment result. I've said: please write up the experiment results for, and I've given it the name of the experiment, which is vertical product tile images. And straight off it's gone and written a nice to-do list. It's called the Eppo MCP and found the results. Great. It's found the rules.
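Reconstructed from Tim's description, the rules file has roughly this shape. This is a sketch, not the verbatim file, and the format section is paraphrased from what he shows on screen:

```
---
description: Format for writing up experiment results using Eppo data
alwaysApply: false
---
When asked to write up experiment results:

1. Ask for the experiment name if you don't already have it.
2. Use the Eppo MCP to pull the experiment's actual results.
3. Use the Notion MCP to pull supporting context: the PRD, the experiment
   doc, and the technical specification.
4. Write the results to a local markdown file first, in the format below,
   so it can be reviewed before anything is published.
5. Then create a Notion doc with the same content, plus a short
   Slack-ready summary at the bottom.

Format:
- Links to the source documents, plus a brief summary of the experiment.
- The key metrics with confidence intervals, color-coded.
- A takeaway: roll out or roll back, the reasons why, and any other
  notable insights.
```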
And now it's going to start working this all out for me, which is great to see. While it's doing all that, we'll just have a look at the format we've gone through. I've asked it to give me the document links, exactly what I want. If I click into more context: a brief summary of the experiment, and then the key bit, the actual metrics that it's got from Eppo. So it's going to show me the actual results and the confidence intervals, it's going to pull out the most important ones, and it will give me a nice little color coding. And then I just want the actual answer from this; I actually want it to do the work of interpreting what we should do next. So it's written the takeaway section: I want a clear, should we roll this out, should we roll it back, what should we do, and give me the reasons why. And are there any other interesting insights that you found that we should call out? So let's see. Right, let's have a look at what it's doing here. It has found everything we need, and it's starting to write out the doc, which is nice to see. In this little thing, I'm just going to go ahead and queue up: turn this into a Notion doc. So as soon as I've run it, while we look at the actual results, it will start writing the Notion doc. And let's have a look. So straight away, while it's running that, I have got a write-up with all the right context I need. It's got the links I needed, it's got the context, it's pulled the right data. Good. The nice thing is these results. This was literally just showing vertical images rather than square images, a really standard growth experiment: which one performs better? And you can see a nice stat-sig lift of about 3.5% for the treatment. And then it's pulled out some other interesting business metrics. Let's have a look at these takeaways. So it's saying: great, roll it out. The right answer, because of that lift. And it's also pulled out some interesting things. It said our data science prediction models are also actually positive, so not only have we got more retailers, the ones we've got are actually higher-quality retailers. So as a first pass, this looks great. And just to call out one thing here personally: we have a standard format for doing these where you have to type the confidence interval and type the emojis, and that is work that is not valuable for our team. So it's pretty awesome that it came up with takeaways, but it also saved us five minutes of fiddling around with emojis and decimal points. >> Yeah. AI as a translation layer, from a SaaS interface or a SQL query into natural language, in the format that you like, that your boss likes. That's just a time-saver in and of itself. So I love using AI as the universal format translator. >> So, as you can see, I've just asked for the Notion link, and it should produce the Notion doc. Let's open that up and put it on the screen. And look, straight away I've got a nice document I can share around with everyone, with all the right color codes and the takeaways. And even as a little bonus, let's see... it's done. It always has trouble getting things into a little toggle.
But right at the bottom here, I've even asked it to spit out a Slack message with an even more summarized version, so I can just drop this into the right review channels and straight away this can go and get approved. Now, are we going to do this for every complicated experiment? Probably not; there might need to be a bit of analysis. But for the simple ones, it's a straight one-shot. Even for the complicated ones, this accelerates you. And also, anyone in the business can start doing this, which means we can pass more and more of these things down to engineers, PMs, and other people, to write this kind of stuff and do the analysis for them, which again can just massively accelerate our launch velocity at Faire, which we're really excited about. >> Yeah. And I'm sorry, I know this is my brand, but I feel like every task is just accruing to the PM. Sorry, PMs, it's your job now. So I do like that little trend that's happening. This is amazing. Love it. I have done these kinds of analyses before; they have not been this easy to read, and they certainly haven't been generated in 90 seconds. Really useful tool for experimentation analysis. A call-out to the experimentation tools out there that I know and love: if you have not made an MCP for access to your data, you are limiting your customers. I do think AI integration of SaaS tools is going to be a way that teams start to evaluate the quality of the tools they're working with. So just something to think about if you're out there building data analysis tools. Okay, we are going to wrap up very quickly with a bonus. We usually only do three use cases, but yours are all so good, so we're going to do a speed-run through a bonus use case, which is actually designing and analyzing the kind of unstructured data in a user survey. So Tim, you're going to whip us through how you could use AI to make surveys and survey analysis a lot better. >> Yeah, I'm going to do this really quickly. We're not going to spend much time on this, but it's just another one of those incredibly common analytics use cases that everyone has to do, and they are just so time-consuming. You've got to design the survey correctly, code it into a survey platform, then analyze all those results. It's really time-consuming. But end to end, AI can just transform the whole process. Let's show another one. I'm not going to run these; I'm just going to go straight to my backup. So let's just start on design. What I love about doing this: you can do it in Cursor, you can do it in many things, but I think ChatGPT Projects is really good for this, and again it's incredibly accessible. Everyone knows how these work, and it's just a great way of giving context. So if we switch over to this one in ChatGPT, which is taking a bit of time to load: you can see in the files, what I did was give it a bit of background information. What is our bit of the business? This was a survey we wanted to design on Faire Direct tools. Those are the tools that we give all our brands to help them accelerate their sales with their own customers. And so I've given a ton of information to the model that just says: what actually is Faire Direct? What are these tools?
I also gave it the strategy. Then, whenever I do a survey like this, whether I'm using AI or not, I'll start with hypotheses; that's ultimately what you want to test. And so this is nice, if I just open up those hypotheses. This is what I fed into it: a list of simple hypotheses on what we want to learn. We got everyone aligned on them; there are 14 in here, and they're really simple. I'll just call out one: higher sales on Faire leads to more usage of these tools. Things like that. Now, I've just given that to it, and all I did, if I look at the prompt we ran, this was a simple prompt. All I did was drop it in, saying: you're a specialist at designing these customer insight surveys; design me a 10-minute survey for a thousand brands to test those hypotheses. I said, these are the inputs I've given you, here's a bit of the design requirements we want, and I asked it for three things. Turn those hypotheses into a full questionnaire that we can ask our customers, but don't just do that: also give me the coding file that turns that questionnaire into the actual platform, in this case Qualtrics, the platform we use to run these things, so we can build it straight away in one click; and give me an analysis plan for what to do with the results. >> I have to pause you really quickly, because this whole episode has been Tim saying, "I just did this really simple prompt," and then you see this 1,000-word, hyper-structured, very organized prompt. And Alexa is like, "Oh man, I would just go in there and be like, maybe a nice survey, please." >> I love it. I'm a big believer that 99% of my prompts are going to be one line. And if I'm going to send a big model off to go do work for 15 minutes, I'll probably ask another model to turn my one line into something more detailed. >> I want the A/B test: Alexa, you run this exact same GPT with a tinier prompt, and you tell me if you get the same quality. See what happens. See what happens. Maybe I just don't trust it quite as much as Alexa does just yet. Okay, so what do we get from that? Very quickly, from a list of hypotheses, I've got a really nice first pass of a survey straight away. It's going to ask a load of questions, and it's about the right length. This can just massively accelerate the process. And once we've got that right, it's also given me that coding file, which I'll just scroll on screen. These things are painful to write, so having a one-liner tell the system exactly how to code this up and write it out saves hours of time for our research operations team. It even translates that into an analysis plan that says, this is what the outputs are going to look like. So straight away, this whole thing can go from a list of hypotheses into something we could probably get out to our customers by the end of the day. That shortens this enormously. But what happens when you get the results back? That's the other thing this can do. Again, I'll do this incredibly quickly and just show you the final result, but I did a very similar prompt. I'm going to show you the file I dropped into this, just to show you how painful this is. I just gave it the same hypotheses, and look how bad this is: it's the raw output from Qualtrics, and these usually take a lot of cleaning.
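Before looking at the raw results, here is a hedged sketch of how a design prompt like the one Tim describes could be assembled programmatically: hypotheses and background in, three deliverables requested. The file names and wording are illustrative, not the actual prompt from the episode.

```python
from pathlib import Path

# Hypothetical input files: the aligned hypotheses and product background.
hypotheses = Path("hypotheses.md").read_text()
background = Path("faire_direct_background.md").read_text()

prompt = f"""You are a specialist in customer insight surveys.

Background on the product:
{background}

Hypotheses to test:
{hypotheses}

Design a 10-minute survey for ~1,000 brands. Produce three things:
1. A full questionnaire that tests each hypothesis.
2. A Qualtrics coding file (question IDs, wording, answer options,
   single vs. multiple choice) so the survey can be built in one step.
3. An analysis plan describing the expected outputs and how each
   hypothesis will be judged against them.
"""
print(prompt)  # paste into a ChatGPT project alongside the same files
```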
The raw export is one line for every respondent, and then one column not just for every question but for every possible answer to every question. So these things are incredibly dense, as anyone who's worked with them knows, and they take a bit of time, a bit of playing with. The only other thing I gave it was a sort of helper file, which was basically the coding file I just showed you: what's the question ID, what's the question language, what are the answers. And then I just add two columns: is it a demographic question or an answer question, and is it single choice or multiple choice? That's all I gave it. Then I wrote another one of my fun and simple prompts. Same role and task here: analyze the survey results, find the most interesting things in this data, and then judge the predefined hypotheses. I want a table that basically says, for each of those hypotheses, was it right or was it wrong? And then, again, I always end on a little quality check; I don't want it to go away for 15 minutes and come back with something that isn't very useful. Let's have a look at this very quickly. So, I've got a nice little summary out front. >> Oh! >> And then there are my 14 hypotheses, and it's got a nice table that says proved, neutral, or disproved for each of them. And, because I asked it to, it's even giving me a confidence score: one means it's really confident, five means it's not very confident at all, and you can see the different levels throughout. Then beneath it, for each of these, I've got the specific analysis that I asked it to do; it's thrown in all the insights it found to back up those findings. So, is this the only analysis we're going to do on this survey? Almost certainly not. But day one, I've got the results, I've thrown them into this, and within a matter of minutes I've got a much, much better intuition of what all that data is showing. So while I might go and do some more analysis on this, I can be so much more targeted on exactly what we want to look into and where I want to spend my time. And straight away, we can start sharing some of these findings out with people very, very quickly. >> Oh no, I'm reflecting now, after this episode: okay, I've told everybody to ship a bunch of features, and now I'm going to be like, do a bunch of analysis. In my mind I'm like, oh my gosh, I'm underusing AI to actually understand my business, and it's so accessible. And if I can just write 17-point prompts like Tim, I can get really high-quality insights. But I do want to call out, just reflecting on this whole episode and your four workflows: so many people think of AI as an input to producing a thing, but haven't done that full circle back to analyzing the thing, sharing the thing, communicating about the thing. I think you're showing both sides. You can create with AI, and you can analyze and communicate with AI. Looking at both sides of that coin is really useful. Okay, we are going to do the one and only lightning round question, because we have gone long on this episode and I want to get you both back to all of your agents and MCPs and analysis. We're going to go back to prompts one last time and figure out your personality around prompts. Alexa, Tim: when AI is not listening, when your MCP will not call the tool, what is your prompting technique? Alexa, what do you do?
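Before the lightning round answers, a quick sketch of the export cleanup Tim just walked through, assuming pandas: one row per respondent and one column per possible answer, melted into a tidy long format via the codebook-style helper file. All column names and values are invented.

```python
import pandas as pd

# Wide, Qualtrics-style export: 1 = respondent picked that answer.
raw = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "Q1_daily":  [1, 0, 0],   # Q1 is single choice (usage frequency)
    "Q1_weekly": [0, 1, 1],
    "Q2_email":  [1, 1, 0],   # Q2 is multiple choice (channels used)
    "Q2_sms":    [1, 0, 0],
})

# Helper file: maps each answer column back to its question and label.
codebook = pd.DataFrame({
    "column":   ["Q1_daily", "Q1_weekly", "Q2_email", "Q2_sms"],
    "question": ["usage_frequency", "usage_frequency",
                 "channels_used", "channels_used"],
    "answer":   ["daily", "weekly", "email", "sms"],
})

# Melt to long form, attach labels, and keep only selected answers.
tidy = (
    raw.melt(id_vars="respondent_id", var_name="column", value_name="selected")
       .merge(codebook, on="column")
       .query("selected == 1")
       .loc[:, ["respondent_id", "question", "answer"]]
)
print(tidy)
```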
>> I think mine's pretty straightforward. The problem I run into most frequently is that I'm clearly running out of context: a conversation has gone so long that it's starting to be wonky. And while level one is just starting over, what AI is best at is summarizing. So I'll say, "Hey, summarize what we've done so far in this 30-turn conversation," and then use that to start over. Because, like I've heard people say on other episodes, you want to figure out where it got off track. Clearly, I'm a pretty efficient person. I'm not Tim; I'm not writing out the entire prompt for 20 minutes. I don't have time for that. I just want to say, "Hey, summarize what happened, we're going to start over," and I give the new conversation that summary so at least it gets some context from the old one. >> Great. And Tim, what about you? >> So much shade for my prompts! It's all AI, it's all AI; you should see what my work chat did. I generally will go and open up three windows in Cursor, do three chats with three different models, put the same prompt in, and go get a cup of tea and see what comes back. That's the British stereotype in me, getting my cup of tea while I do it. >> Yeah, you run the A/B test is what you do. Okay, I love this. Tim, Alexa, where can we find you and what can we be helpful with? >> You can find me on LinkedIn; my full name is Alexandra. And ways to be helpful: our strategy and analytics team is hiring across the board. Our team partners super closely with PMs and our go-to-market team, and we make strategic, data-driven decisions. Super fun. We have tons of open roles, so if you like experimenting with AI, we are very AI-forward. You can learn more at faire.com/careers. >> And you can find me on LinkedIn as well, and I'd echo that: come join us if you love AI. Come join us and show us how we can do it even more here. >> Okay, we will link to your careers page in the show notes. Alexa, Tim, this has been so fun. Thank you for joining How I AI. >> Thank you for having us. >> Thanks for having us. >> Thanks so much for watching. If you enjoyed this show, please like and subscribe here on YouTube, or even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app. Please consider leaving us a rating and review, which will help others find the show. You can see all our episodes and learn more about the show at howiaipod.com. See you next time.
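As a coda on Alexa's summarize-and-restart technique, here is a minimal sketch using the OpenAI Python SDK (`pip install openai`); the model name and summary prompt are placeholders, and any chat-capable model would do.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def restart_with_summary(history: list[dict]) -> list[dict]:
    """Compress a long, wonky conversation into a summary, then seed a
    fresh conversation with that summary as the only carried-over context."""
    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=history + [{
            "role": "user",
            "content": ("Summarize what we've done so far in this "
                        "conversation: goals, decisions made, and open "
                        "questions, in under 300 words."),
        }],
    ).choices[0].message.content
    return [{"role": "system",
             "content": f"Context carried over from a previous chat:\n{summary}"}]
```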

Summary

This video demonstrates how Faire's data team uses AI tools like Cursor, enterprise search, and custom agents to accelerate product analysis, from identifying issues to generating insights and reports, making data analysis more accessible and efficient for non-technical teams.

Key Points

  • Faire uses AI tools to transform data analysis by automating context gathering, SQL generation, and report writing.
  • The team leverages enterprise search in Notion to quickly identify potential causes for business metric drops.
  • Cursor is used for deep research in codebases to understand feature implementation and impact on user experience.
  • AI agents are built to automate routine tasks like writing experiment result reports, saving analysts time.
  • A semantic layer is created to help AI understand business terms and generate accurate SQL queries.
  • Custom agents can be built using Cursor rules to automate complex workflows end-to-end.
  • These tools democratize data analysis, allowing non-engineers to perform deep analysis without writing code.
  • The approach enables faster decision-making by providing timely, structured insights to leadership.

Key Takeaways

  • Use AI tools like Cursor and enterprise search to rapidly gather context and identify potential issues in your product.
  • Build custom agents to automate repetitive analysis tasks like writing experiment reports.
  • Create a semantic layer to help AI understand your business data and generate accurate queries.
  • Leverage AI to analyze your codebase directly for forensic insights into feature impact.
  • Use AI to generate structured reports and summaries that can be easily shared with stakeholders.

Primary Category

AI Tools & Frameworks

Secondary Categories

AI Engineering, Data Engineering, AI Business & Strategy

Topics

AI tools, data analysis, enterprise search, custom agents, semantic layer, Model Context Protocols, Cursor, ChatGPT, SQL generation, automated reporting, experiment analysis, survey design, context engineering, AI orchestration

Entities

people
Tim Trueman, Alexa Cerf, Claire Vo
organizations
Faire, Zapier, Brex, OpenAI, GitHub

Sentiment

0.85 (Positive)

Content Type

demo

Difficulty

intermediate

Tone

educational, entertaining, technical, inspirational, professional