How this PM streamlines 60k-page FDA submissions with Claude, Streamlit, and clever AI workflows

howiaipodcast A204lKJryoQ Watch on YouTube Published July 13, 2025
Duration
45:13
Views
7,412
Likes
137

Scores

Composite
0.53
Freshness
0.00
Quality
0.84
Relevance
1.00
8,092 words Language: en Auto-generated

So you are working in the life sciences on really health-impacting vaccines and treatments, and with all that amazing investment in scientific work also comes a lot of paperwork. We had to develop a nearly 60,000-page document that would have taken about four to six months of effort and nearly 20 specialists, not to mention the millions of dollars spent. Where did you start? I gave Claude the problem statement, the pitch, and a demo. The thinking I had in mind is that Claude is a software engineer and I'm talking to them, trying to tell them why it matters like any good PM would, and trying to tell them what end product we want to produce as a result. And not only did it fully understand what I was asking it to develop, it created a little setup-instructions markdown file for me that gives me all of the setup instructions, all of the tasks that it does, and its capabilities, which is super handy.
>> Were you really able to shift the cost there? Did you see an impact?
>> We did. We did. Any kind of cost savings you can generate has a direct impact on the bottom line. And in addition, if you're saving on time, you're putting life-saving vaccines in the hands of people who actually need them.
Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today we have Prerna, who's taken years of machine learning experience and helped develop some of the products you know and rely on every day at companies like Amazon Alexa, Moderna, and Panasonic. She's going to show us how she uses AI to accelerate drug discovery and vaccine approval, something that takes tens of thousands of pages of documents, dozens of people, and months and months of work. She'll also show us how you can use Jane Austen, Dale Carnegie, and some AI to manage your stakeholders just a little more easily. Let's get to it.
This episode is brought to you by CodeRabbit, the AI code review platform transforming how engineering teams ship faster with AI without sacrificing code quality. Quality code reviews are critical but time-consuming. CodeRabbit acts as your AI co-pilot, providing instant code review comments and the potential impacts of every pull request. Beyond just flagging issues, CodeRabbit provides one-click fix suggestions and lets you define custom code quality rules using ast-grep patterns, catching subtle issues that traditional static analysis tools might miss. CodeRabbit brings AI-powered code reviews directly into VS Code, Cursor, and Windsurf. CodeRabbit has so far reviewed more than 10 million PRs, been installed on 1 million repositories, and has been used by 70,000 open-source projects. Get CodeRabbit free for an entire year at coderabbit.ai with the code HOWIAI.
Prerna, it's so nice to have you. I'm really excited about your workflows.
>> Thank you so much for having me. Very excited to be here.
>> So you are working in the life sciences on really health-impacting vaccines and treatments, and with all that amazing investment in scientific work also comes a lot of paperwork, if I am correct. In addition to doing all this amazing research, you also have to document and submit a lot of material to the FDA and other regulatory authorities, and I am sure that is a total pain. So tell me a little bit about the problem that led you to building some of these AI workflows.
>> Well, yeah, firstly, thank you so much for having me, and you're totally right. With life sciences,
I think the challenge across all parts of the organization is a lot of paperwork, a lot of regulation, and going back and forth with regulators can take months, not to mention all of the people involved and the delays in getting, in some cases, lifesaving drugs and vaccines to market. For example, when I was working at Moderna, we were developing the INT cancer vaccine, which is intended for folks with skin cancer. It was a new, niche concept, and we had to develop a nearly 60,000-page document, known in the technical jargon as a Biologics License Application (BLA). It would have taken about four to six months of effort and nearly 20 specialists, not to mention the millions of dollars spent across the organization getting this document together and providing it to regulatory authorities across the world. Very labor-intensive, very time-consuming. That's where I came in. I was a GenAI product manager there and really came in to think through how we could reimagine this using generative AI. This was back in 2023, so a couple of years back, and I've evolved it over time to specialize in certain areas as well. I'd love to show what we came up with.
>> So it sounds like you created a really high-impact tool; 60,000 pages is pretty intense. What I love about this workflow is that you're going to show us how you used GenAI to build this GenAI solution. So, like all good product people, you started with requirements. Can you show us your process, from identifying the problem, that we have to create this massive document and it's going to take millions of dollars and tons of people, to actually having this production thing your team and colleagues could use? Where did you start?
>> Like every good PM, I started with the requirements. I actually hadn't fully formed all of the ideas initially, because with the complexity and the domain knowledge required it took a few iterations, but I'd love to show where we started, with a very simple pitch, and then go through how we actually evolved the idea. So I gave Claude the problem statement, the pitch, and a demo. The thinking I had in mind is that Claude is a software engineer and I'm talking to them, trying to tell them why it matters like any good PM would, and trying to tell them what end product we want to produce as a result. I included statements such as: this is Stéphane Bancel's passion project (he is Moderna's CEO) and he's very excited about it; here are the cost savings you can get; here's why it matters and the impact. And then I also spoke about the demo I wanted to showcase and how we could iterate and build on it. And Claude came back to me, and not only did it fully understand what I was asking it to develop, it created a little setup-instructions markdown file for me that gives me all of the setup instructions, all of the tasks that it does, and its capabilities, which is super handy. Not only did it do that, it also gave me all of the code and all of the different aspects of the code as well.
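For readers who want to try the same pattern, here is a minimal sketch of the kind of PM-style pitch prompt described above, sent through the Anthropic Python SDK. The prompt wording, file names, and model ID are illustrative assumptions, not the actual prompt used in the episode.

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A PM-style pitch: problem statement, why it matters, and the end product we want.
# The wording below is hypothetical; the episode does not show the exact prompt.
pitch = """You are a senior software engineer on my team.
Problem: assembling a Biologics License Application (BLA) today takes ~20 specialists
and 4-6 months. Why it matters: every month saved gets vaccines to patients sooner.
Deliverable: a Python + Streamlit prototype that (1) generates synthetic clinical
trial data, (2) detects and redacts PHI, and (3) drafts a structured CTD summary.
Please return SETUP_INSTRUCTIONS.md plus the application code."""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whichever Claude model you have access to
    max_tokens=4000,
    messages=[{"role": "user", "content": pitch}],
)

print(response.content[0].text)  # setup instructions and code come back as text
```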
So I started here and began thinking through what I really needed to do to make this a production-ready system, and there were two things that were extremely important. First, because the BLA is a structured document, it is XML-based and has certain formats, I needed Claude to take data and structure it into a specific format in a very strict way. That was a hard requirement, and it's not super easy to get from some of these models. The second was that because it is clinical data, it's highly regulated, and being able to detect PHI was extremely important so that the system could correctly redact it and not have it included in the actual BLA we would be sending to any authority, because it's patient data, essentially. The reason we chose Anthropic's Claude is its reputation around safety and alignment with human values. We formed this partnership with Claude and really focused on the PHI and safety aspects, to make sure that, not just from a compliance standpoint but also as we think about alignment, we were taking care of those use cases. So this is what it came up with. It did a really good job, I was able to get some code running as well, and I'd love to show you what it came up with.
>> If we can go back to the requirements really quickly, there are a couple of things I want to call out for folks that I think are really interesting. If you go to the step-by-step instructions, what I love is that at the top it not only gives you how to technically write the code, but, something I wanted to call out, at the bottom it actually gives you a pitch narrative for how to run your demo. What I think is so interesting about this specific prompt is that you gave it pretty loose requirements, only a couple of paragraphs up top, and it gave you a whole set of things, many of which I'm sure you weren't anticipating. I wouldn't have anticipated getting a demo script from Claude when I'm asking it to build Python, but it gave you that. I thought that was really interesting. And then if you scroll down on the left side, something else I'd like to call out is that it gave you component names for the different technical pieces of the product, like generating mock data, doing PHI redaction, all that kind of stuff. As a software engineer and a product person, how long do you think it would have taken you to come up with the full set of requirements this first one-shot prompt produced?
>> At least a couple of days, probably about a week, to really think through everything.
>> And beyond knowing you wanted to work with the Anthropic models because of safety and alignment, were there other libraries or models you knew you wanted to use? Did it pick the right ones? How close to the technical implementation did it get?
>> In particular with the redaction piece, it would have taken a couple of data scientists to figure out the right solution and work with us. So data scientists as well as software engineers, to figure out what's the right solve for this, how do we implement it, and how do we test it against synthetic data.
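As a rough illustration of the redaction step being described here, the sketch below combines a crude regex baseline with an LLM pass that is asked to return PHI spans as strict JSON. This is an assumption about how such a pass could look; the episode does not show the actual models or prompts, and a production system would rely on a validated medical NER model rather than this toy version.

```python
import json
import re
import anthropic

client = anthropic.Anthropic()

DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")  # crude baseline: dates only

def redact_phi(note: str) -> str:
    """Redact PHI from a free-text clinical note (illustrative sketch only)."""
    note = DATE_RE.sub("[REDACTED-DATE]", note)

    # Ask the model for PHI spans as strict JSON so the redaction step stays deterministic.
    prompt = (
        "Find all PHI (names, dates of birth, addresses, identifiers) in the note "
        "below. Respond with JSON only: a list of objects with 'text' and 'type'.\n\n"
        f"NOTE:\n{note}"
    )
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
    )
    for span in json.loads(resp.content[0].text):
        note = note.replace(span["text"], f"[REDACTED-{span['type'].upper()}]")
    return note

print(redact_phi("Pt Jane Smith, DOB 03/14/1962, reports mild injection-site pain."))
```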
So I think where it really excelled is in picking the right models. I have some amount of machine learning knowledge, but it would have taken me a couple of days of study to figure out, okay, this is the right model for medical named entity recognition and this is how you go about it. But it did such a great job with very little information up front on the domain itself. It was able to pick the right models, give me the code, implement it, and show me the confidence scores across that as well.
>> Yeah. And if we look at the code, I just have to laugh, because many years ago I think I lost hours and days of my life to writing regexes that didn't work. It was just awful. And so one of the miracles of AI, as I see it, is that it reduces the toil in technical tools you need to use that have documented patterns but are just so hard to research and get perfectly right out of the gate. So this was my favorite part of seeing your code: all that regex just pre-populated for you.
>> Yeah, that's such a common and shared pain point. Regex has been the bane of many a software engineer's existence.
>> Yeah. So, okay. It created this Python script for you, you scrolled through it and it looked pretty correct, and then you decided to deploy it on Streamlit. Can you tell us a little bit about that choice, why you picked it?
>> Yeah, so I love using AI to automate my own work and solve my own day-to-day problems, whether they're related to work, side projects, or something personal. But the real unlock, at least as I've found, is that if I can create something of value for other people, especially non-technical folks, then making a UI and making something generally available and hosting it has proven to be a really big unlock, both career-wise and in working with colleagues. They find it really valuable, because they might have something in mind, and I have these skills I can offer. That's something I've been doing in my current organization as well, where I recently created a Google add-on to convert PRDs into Jira format. That was a lot of fun, and the product managers especially were very excited about it, because they don't like having to write requirements twice, once in a PRD and then again in Jira, and we found a good way to circumvent that and do it in a matter of minutes. So I did that, and I've also used the Streamlit idea in a couple of my previous roles and in my current role to give non-technical users access to all of these tools.
>> Yeah. And just for folks who don't know, Streamlit is a really easy way to put a pretty simple UI in front of a Python file and a set of APIs.
>> Exactly.
>> So, Claude created your PRD. It gave you code. You were pretty happy with the code. How do you get from us looking at this code right now to a product your team could actually use to create these documents?
>> I'm going to run the actual Streamlit app for you. I'll take you to the terminal; I essentially run it from the command line and it opens up a web app. It takes me here. It has preloaded a Claude API key, which I provided in the code.
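For readers who want a feel for what a front end like the one being demoed might look like, here is a minimal Streamlit sketch with one button to generate synthetic clinical data and another to run a redaction pass. The layout, column names, and helper functions are illustrative assumptions; the episode does not show the app's source.

```python
# Run with: streamlit run bla_assistant.py
import os
import pandas as pd
import streamlit as st

st.title("BLA Drafting Assistant (illustrative sketch)")

# Read the key from the environment rather than hard-coding it in the script.
if not os.environ.get("ANTHROPIC_API_KEY"):
    st.warning("Set ANTHROPIC_API_KEY before generating documents.")

def make_synthetic_trial_data(n: int = 5) -> pd.DataFrame:
    """Fake participants with free-text notes that deliberately embed PHI."""
    return pd.DataFrame({
        "participant_id": [f"P{i:03d}" for i in range(1, n + 1)],
        "arm": ["control" if i % 2 else "treatment" for i in range(1, n + 1)],
        "clinical_notes": [
            f"Participant seen on 03/0{i}/2024 by Dr. Example; mild fatigue reported."
            for i in range(1, n + 1)
        ],
    })

if st.button("Generate synthetic clinical data"):
    st.session_state["df"] = make_synthetic_trial_data()

if "df" in st.session_state:
    st.dataframe(st.session_state["df"])
    if st.button("Detect and redact PHI"):
        df = st.session_state["df"].copy()
        # A real app would call the redaction pass sketched earlier; this is a stand-in.
        df["clinical_notes"] = df["clinical_notes"].str.replace(
            r"\b\d{1,2}/\d{1,2}/\d{4}\b", "[REDACTED-DATE]", regex=True
        )
        st.dataframe(df)
        st.download_button(
            "Download CTD draft (XML)",
            data="<ctd><module5/></ctd>",  # placeholder document body
            file_name="ctd_draft.xml",
        )
```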
So I've hidden some of those properties, and I'm going to generate synthetic data for us. I'll talk through the steps as well, because it takes a bit of time, and I'll explain what it has done. As I mentioned, there's this format called the common technical document (CTD) as part of the BLA. There are about five or six different modules, and it has a structure around it. There is a study report that has a tabular listing; it's supposed to have very specific points drawn from the clinical trial data, and then different literature references as well.
>> Did Claude know about this format from general knowledge, or did you have to provide the format to it?
>> It looked up the common technical document format for me and then figured out the right module to apply to my use case.
>> Great. That's awesome.
>> Yeah, it's doing pretty well. So I'm running it here. We have different participants, and there are some clinical notes per participant. This is all clinical trial data, very common, the kind you would see in any trial that runs. And by the way, the trials themselves can run in the order of billions of dollars, so redoing a clinical trial is very expensive. As much as possible, we want to give very accurate information to the FDA so they don't ask us to redo anything. What I've done here is de-identify some of this patient data to remove any names, dates of birth, any information that could be associated with the person.
>> And one of the challenges I'd love to call out for folks that maybe aren't seeing this is: you generated some fake data so you could validate that it works. And it's not only this tabular data with patient ID, ethnicity, whether they were in the control arm or otherwise; you also have these free-form clinical notes where there may be embedded PHI written in by a clinician or a study administrator. So it's not as simple as saying, just find all the date-of-birth columns and redact them. You actually have to go through this unstructured data and find potential sources of PHI. Is that part of the challenge?
>> That's exactly it. It's a complex problem. Machine learning doesn't always do a perfect job there, but Claude immediately identified the right model to do this.
>> Great. So you have this clinical data preview, and then you're going to show us how you de-identify it, so you're not sharing it with folks who don't have access to that health data.
>> Exactly. So with this synthetic data, I clicked this button to detect and redact PHI, and it scanned through all of the rows and found all of the PHI. Here's what it came up with. You can see in the clinical notes that it found some dates and names, and there was some birth-date information as well, and the rest it was able to keep in a de-identified way so that I can include it in my reports.
>> Great. And how would this have been done in the past? I think you said there are software engineers or ML engineers who would have built these models to redact the data. Was there any human in the loop at any point? How was this done before?
>> For the past couple of years, most pharma companies have been using machine learning for this.
But because it is PHI and there's a very high bar, these projects end up taking months of effort to get right, especially when clinical trial data might not look the same for every drug. As a company like Moderna was trying to scale, we were trying to bring nearly 50 drugs into clinical trials and then 15 to market in the next three to five years, so that problem was exacerbated. Coming up with new machine learning models to solve different problems across each of these vaccines is really a matter of scale and time, and we needed to find a better way to do it.
>> Great. Okay, so we've got redacted health data. You have all your billions of patient data points. What's next in this flow?
>> The next thing is to generate the common technical document, and I'll show you a small preview just for the purpose of this demo. What it did is take all of that and give me a summary of the different data. It was able to give me a synopsis, which is what the FDA looks for: the study participants, the counts, the ages, all of the statistical data points, as well as some of the specific medical terminology they're looking for. So it summarized everything and gave me this. For context, this is typically done by a medical writer, and I've worked very closely with them; it's so time-consuming because, while they can do a pretty good job of generating a first-draft summary, they have to go back and forth with their stakeholders for weeks, sometimes on a single module, to come up with the right set of criteria, and getting all of their stakeholders to align is very time-consuming. Twenty specialists, as I mentioned, are working on it at any given point in time. And as part of what the ChatGPTs of the world and Anthropic can provide, there are also shared projects and shared environments where all stakeholders can collaborate and work together on these summaries, which is so much easier and so much better than working in, say, a shared document and just leaving comments.
>> Yep. Great. Okay. So from that data it generated this very strict, I mean, it looks official to me, and I'm not part of the FDA, but it's a summary of statistics, a summary of the methodologies, a summary of the impact. It's pretty long and detailed, and it looks like you can download it both as a TXT file, like a doc form, and as this, as you mentioned, proprietary XML format that's required. And I'm presuming the code Claude generated just gave you those two buttons to make that happen?
>> That's exactly it. And I know it's a little bit proprietary, but I cannot tell you how excited I was to see this button.
>> This episode is brought to you by Lovable. If you've ever had an idea for an app but didn't know where to start, Lovable is for you. Lovable lets you build working apps and websites by simply chatting with AI. Then you can customize it, add automations, and deploy it to a live domain. It's perfect for marketers spinning up tools, product managers prototyping new ideas, or founders launching their next business. Unlike no-code tools, Lovable isn't about static pages; it builds full apps with real functionality. And it's fast: what used to take weeks, months, or even years, you can now do over a weekend.
So, if you've been sitting on an idea, now's the time to bring it to life. Get started for free at lovable.dev. That's lovable.dev.
>> Okay. So, to take stock at the halfway point of the podcast, here are the two things you were most excited about and I was most excited about: I was excited about the regexes, you were excited about the XML. This is a very nerdy and very specific episode of How I AI. Okay. So, you've generated the doc, and I love this last part. Let's talk about this last piece of the product.
>> What's super interesting is that I have worked in a couple of different organizations where, as we are trying to scale different AI use cases, cost becomes a factor, because there's not just token cost but also acquiring licenses for these models and running them, and as the number of consumers or users of the use case scales, the dollar cost scales proportionally. So one of the ways we could continue to launch and build products around this is by being transparent about the cost associated with the operations. Generally, one of the stakeholder arguments where I've previously received pushback is: this is going to be high cost, can we find a simpler way that's not AI-based? How I've tried to solve this is by being transparent about the cost associated with it and how long it would take. As you can see, this trace and cost analysis is super useful because it shows cost per operation, gives a duration against it, and then actually breaks down, by operation, the token cost and the total cost. Extremely useful. You can continue to track and monitor this in production as well.
>> Yeah. So then you can say, for example, PHI redaction per patient costs 15 cents, and you can decide as an organization whether that is more or less expensive than alternative approaches. I actually haven't heard anybody give this suggestion, so I want everybody to hear it: this is what to do if you're getting internal resistance on cost. In fact, I feel this even as a solo founder who's vibe coding things; I see Claude Code tell me "this cost you $5.13," and when I send that to people they say, "that's so much money," and I'm like, it's $5. So I love this idea of bringing true transparency, and I'm sure these numbers are much smaller than your production numbers on your million-person studies, but it does bring a true sense of ROI and investment that I think can get you through internal hurdles.
>> Yeah, it really has helped, and I really love Streamlit for this, because it's tracking everything per operation, so we can plug and play very easily.
>> This is awesome. And so you built this and are getting it into the hands of the folks who need to submit these documents. You were telling me millions of dollars, months and months, tons of people. Were you really able to shift the cost there? Did you see an impact?
>> We did. We did. We actually applied some of this thinking to our cancer vaccine, the INT vaccine that's coming out, and our combined RSV vaccine as well. Both are past their clinical trials, and I think we're looking at commercialization there.
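Coming back to the trace and cost analysis described above, here is a minimal sketch of how the token usage returned by the Anthropic API can be turned into a per-operation cost record. The per-million-token prices and the model ID are placeholders; check current pricing for whichever model you actually use.

```python
import time
import anthropic

client = anthropic.Anthropic()

# Placeholder per-million-token prices; substitute your model's actual pricing.
INPUT_PRICE_PER_MTOK = 3.00
OUTPUT_PRICE_PER_MTOK = 15.00

def traced_call(operation: str, prompt: str) -> dict:
    """Run one model call and return a cost/duration record for the trace table."""
    start = time.perf_counter()
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    duration = time.perf_counter() - start
    cost = (resp.usage.input_tokens / 1e6) * INPUT_PRICE_PER_MTOK + \
           (resp.usage.output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK
    return {
        "operation": operation,
        "duration_s": round(duration, 2),
        "input_tokens": resp.usage.input_tokens,
        "output_tokens": resp.usage.output_tokens,
        "cost_usd": round(cost, 4),
    }

print(traced_call("phi_redaction", "Find PHI in: Pt seen 03/14/2024 by Dr. Example."))
```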
So we found that cost savings was definitely one thing, but the stakeholders themselves were super engaged in the process, and we built a system that scales pretty well as the company scales. The most useful thing is that we're saving on repeated work. I'll just give some numbers: billions of dollars in costs per trial, three clinical trials per vaccine, and we want to bring 15 vaccines to market. That's probably in the order of a couple of hundred billion, which is a pretty significant number, and usually most companies don't have that sort of cash. Any kind of cost savings you can generate has a direct impact on the bottom line. And in addition, if you're saving on time, you're putting lifesaving vaccines in the hands of people who actually need them. So I was really motivated to help here.
>> Well, this is such a great use case. I love that you showed going from Claude to a Streamlit app, and then from the Streamlit app to real impact on something that, as a mom, I thank you for: RSV and COVID vaccines. The sooner you can bring those to market, the happier I'm going to be. So this is amazing. I know you have one more workflow for us on the product management side, and you teased this PRD-to-Jira thing, so I know you're thinking a lot about product management generally and AI. Do you want to show us your next workflow?
>> Absolutely. The problem statement here is that as product managers we end up, in our day-to-day roles, working with a lot of different stakeholders who have various priorities when it comes to the product. I personally really want to make sure my stakeholders are heard and that we're working very collaboratively, and while we do that, I want to make sure we're making the right decisions and bringing everyone along and moving things forward. With influence and communication, which is a super important skill as a PM, it's hard to say which communication framework to apply when, and what type of communication would land best with a certain stakeholder, because it's so personal and you really have to empathize with and understand people. So it's a very important skill to work on. I've been starting to lean on Claude a little bit, training it to give me ideas and using it as a brainstorming buddy in the process, and this has really helped me save time ahead of meetings, in preparation. So I'm going to go through a prompt that I created. I want to show you this really cool thing that Anthropic developed; it's called the prompt generator. If you go to the console at console.anthropic.com, you can create net-new prompts or generate prompts as well. And I created one for influence and communication, where essentially, I didn't write this prompt by the way, it generated it for me. I gave it some instructions and it gave me very precise instructions back: structured values that I can include as part of the prompt, how it would break down the problem, how it would think and reason through it, and then what outputs it would give me in return.
>> So a couple of things I see here. One, you used this tool in the Anthropic console called the prompt generator.
It generated the prompt, for those who aren't looking at the screen. The other thing I want to note is that XML is the name of the game today. It has these code-block-style sections for the situation, your knowledge base, and how you do strategic thinking. Like all good prompts, it starts with "you are a blank" and gives the model a role. And then if we scroll down a little bit, you can see that it also lets you put in different knowledge, and then it tells the model to provide its response in a very specific format that combines an analysis, your communication strategy, a play-by-play, and anticipated questions. So this is a very structured prompt. Have you started to reuse this kind of XML-style prompt, or have you played with different things? What's your approach, or do you just use what Claude gives you?
>> Yeah. I usually start by generating a prompt, and then you see this button here, "use Claude to optimize your prompt." This is extremely useful, because you can give it feedback and in real time it will improve your prompt and add it back in here. So now let's put this prompt into the actual Claude project. Here we go into Projects, and this is our influence and communication coach. I'm going to update it. You can see here that you can set project instructions, so I'm going to paste this prompt into Claude and save these instructions. And I've also given it project knowledge: I've actually trained it on free books from Project Gutenberg, which consist of all sorts of things, from literature to Dale Carnegie, tactics on persuasion, and how to win friends and influence people, all of these cool books that are free and in the public domain.
>> So, I was not expecting Jane Austen to be in your product-manager strategic-communication Anthropic project, but I am excited to see it. You not only gave it its prompt, you loaded it up with classic literature, classic writing on influence, persuasion, and communication. Very, very interesting.
>> Yeah. Jane Austen is a favorite of mine. I'm not sure how many folks would use her work for persuasion, but I found it very enlightening. So now I'm going to walk you through a prompt that I have been working through. This is just a test case, but we can try something live. How about we try that, Claire?
>> Let's do that.
>> Amazing. So do you have a situation in mind, or should I ask it to generate a couple of situations?
>> Oh, let's ask it to generate a couple of situations. I'm curious what it'll come up with.
>> So, I'm using Claude Sonnet 4, which is one of their reasoning models. And as a reminder, the goal of this particular project is essentially to take a curveball problem at work and come up with a structured strategy. So here's what it's done: it's made the assumption that I'm Alex Kim, an AI product manager working at a healthcare AI startup, and these are all of my stakeholders, experts in the domain with very strong opinions about various things. It has broken down a problem statement that I'm trying to solve, where two weeks before a major presentation these are all of the concerns that my stakeholders raised. So it's actually created that for me, and it has also described the core challenge here: navigating ethics, compliance, competitive pressures, and stakeholder expectations. And it's also personalized it, which is cool.
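To make the XML-style prompt structure described above concrete, here is a minimal sketch of an influence-and-communication coach prompt set as the system message on a Claude call. The tag names and wording are illustrative assumptions, not the actual prompt the guest generated in the console.

```python
import anthropic

client = anthropic.Anthropic()

# XML-style sections: role, knowledge base, thinking process, and a strict output format.
SYSTEM_PROMPT = """You are an influence and communication coach for product managers.

<knowledge_base>
Draw on the classic works on persuasion and influence supplied as project knowledge.
</knowledge_base>

<strategic_thinking>
1. Restate the situation and each stakeholder's underlying interest.
2. Choose the communication principles that best fit this audience.
3. Plan the sequence of conversations before the decision meeting.
</strategic_thinking>

<output_format>
Respond with <analysis>, <communication_strategy>, <play_by_play>,
and <anticipated_questions> sections, in that order.
</output_format>"""

resp = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=2000,
    system=SYSTEM_PROMPT,
    messages=[{
        "role": "user",
        "content": "Generate a realistic stakeholder-conflict scenario and coach me through it.",
    }],
)
print(resp.content[0].text)
```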
So for people who aren't seeing what it generated, which is a very realistic, unfortunately, challenge: it generated this scenario, which we're just going to pretend is a situation we're facing at work. I'm a PM, and my head of clinical data just identified a bunch of issues, both privacy and data accuracy, two weeks before we have a major presentation with a huge prospect, and then it lists out everyone's different opinions. I think PMs can really empathize with, maybe not this particular situation, but a situation like this. So you've created an AI project that says: give me a sticky situation and I will help you navigate it given all of the stakeholders. It's listed the stakeholders, it's given you the core challenge. So what do we do? What do we get next?
>> All right. So I'm going to ask it to use this and proceed.
>> And then it's going to use the strategy framework that you came up with in the prompt to break down the problem and give you an approach you might be able to use?
>> Exactly. So it's now analyzing based on the project information that I shared with it, and as we can see, it's picked out the most relevant sections from the knowledge as well. I had given it around 10 to 15 different books and documents, but it found the top two and then picked out the right sections from those. It was able to search that information, point to it, and analyze the whole workflow. So it gave me a situation analysis, which I found particularly helpful because it validates that it understands the core challenge, the organizational context, and some of the cultural factors as well. And here's my favorite part of the prompt: it actually goes in and looks at some of the influential tech leaders we have all heard of, and it inspires the user by saying, okay, this is how Satya Nadella or Andy Jassy or Tim Cook or Jensen Huang would actually approach the problem.
>> This is amazing. And so you're giving real-life examples or quotes so someone can say, "Okay, what if I were to approach it like this other person?" which I hear is a very common AI prompting technique: act like this person, or act like you work at this place. It can really bring a set of unstated principles to a problem, which I think is really effective. Okay. And so it's come up with document examples, tech leader examples, communication principles. We all love How to Win Friends and Influence People, so some Dale Carnegie principles, and then it's giving you some strategies you might be able to take here.
>> Yeah. So, out of all of that vast array of information that I gave it, it picked out the three strategies that would apply. It talks about the ethics of it, which I personally agree with quite a bit. It's talked about positioning privacy-first as a competitive advantage, it's talked about compliance, and now it's going to give me a collaboration strategy as well. So after analyzing the situation, it summarized for me: here are the three things you should do to strategically approach the situation. First, make sure you're acknowledging and validating each stakeholder's expertise; second, collaborate to discover an approach that demonstrates responsible AI leadership; and then talk about the ethics aspect of it. So it's really honed in there.
So it's very helpful. Of course, a good PM or product leader in the space would have this information, but it's helpful to see it as a thought partner as you're brainstorming. And then it has actually broken down my pre-work for me on a day-by-day basis: what are the individual conversations I can have with each of my stakeholders, and which question to focus on and ask in each meeting. And then it's given me a minute-by-minute structure for the final leadership meeting as well. And here is my favorite part: not only the play-by-play for the meeting, it has given me back-pocket questions that could be thrown at me as curveballs and that I should probably have an answer prepared for.
>> My mind is blown right now, because I can think of so many PMs who stress out so much when they're asked to take on a big problem that has executive leadership eyes on it, and so many just don't know how to approach those meetings: who do I talk to, who do I not need to talk to, what do I show up to this meeting and do, and how do I show up to this meeting really well prepared even when it's a challenging problem? And you've created this project that just says, here's my problem, here are all the people, and it's given you a schedule and an agenda to follow in these conversations, which is super helpful and would help PMs not only save a lot of time prepping for these meetings but actually show up more prepared and more confident.
>> I agree completely. It has been extremely helpful, and I'm actually happy to share this publicly if it's helpful to folks.
>> People are definitely going to want it. So post in the comments if you would like this publicly shared, so you too can prepare for your meeting where all your stakeholders disagree. Well, this has been so amazing. I love that you showed us both building something very technical that solved a really complex and expensive problem, and also something really practical that brings a lot of heart, experience, and strategy to a human problem that feels like it doesn't have a structured solution, but a lot of the time it does. So we're going to hit you with some lightning-round questions, and then I will get you back to whatever you are building next, or to those really tough meetings that you're very prepared for. The first thing I would say is that so many of our listeners are in software and technology, but what you're showing is that the life sciences, biotech, and the hard sciences have so much to benefit from embracing AI. So what would you say to your peers in life sciences, who may have been doing something a very specific way for a long time, about how they can adopt AI and how it can transform the impact they can have on the world?
>> The thing that I have been doing quite a bit with my stakeholders, and that I've been encouraging my peers to work on, is figuring out where they spend the majority of their time, which tasks they actually want to focus on, and which tasks they want to delegate. So, what are the energy-maximizing tasks and what is actually a drain on their energy and time? Now you have access to AI, and you can take a Claude project and create one for any of the use cases and tasks you don't really want to spend time on.
So I think the best thing about this is that it's not just for individuals; you can make GPTs and projects available to all of your stakeholders, and all of them can benefit from it and really enjoy the perks of your great thinking. So I would encourage people in life sciences, as well as any folks who are interested, to play around with these tools and not be afraid of trying to optimize their own work, because they might find it's a common need for all of their peers in the process.
>> So, speaking of being a little bit afraid, you work in a highly regulated industry with patient data; you have to have the highest ethics. The burden of accuracy is so important, the burden of privacy is so important. How do you think about safety, privacy, and alignment? It's not a problem that a lot of the vibe coders or PMs at software companies really have to grapple with in their day-to-day, but you do. I'm just curious how you approach it. What's your framework? Where does it stack-rank in your priorities?
>> Absolutely. I think with most regulated industries, and I've worked in finance before and in healthcare, and I also currently work in family wellness where there are teenagers and kids involved, compliance and privacy come first. But it's a bit more than privacy and compliance; it's the ethics overall that one must prioritize. As it relates to LLMs, looking at the benchmark data on which LLMs perform better on ethics, and what edge cases have been observed in terms of safety, is priority zero for a product leader in the process. Of course technical leaders and engineers can make decisions on which models to use when, but I think product leaders can provide the lens of, hey, this model has shown not-so-promising results for these kinds of populations in terms of ethics and alignment with human values; how might we shift the process and avoid using solutions that might not benefit the end consumer? The other aspect, particularly with safety and alignment, is that it's important to identify how you would evaluate your solutions, both offline as you're developing them and in production, so the online performance monitoring of it. That's also an area where product leaders are particularly helpful, because they can provide both and share, here are the hundred things I want to know the model does well on when it comes to safety; if it's not doing well, let's do a rollback, let's find a quick solution. And oftentimes, as you know, in tech things move extremely fast; we're rolling out very quickly and operating one click faster than we usually do. So I think taking that strategic view is extremely important, and those are some practical ways I would suggest folks who are using LLMs could apply.
>> Well, you're setting the bar, and I'm really glad that you are one of the product leaders out there building these kinds of products, because it's clear you take it seriously. Okay, last question, I have to ask everybody: when AI is not listening, when it gives you terrible strategic communication advice, what's your tactic? How do you get the LLM to listen?
>> Many emojis. It does really well with emojis, and it also seems to adjust when you put in an emotion.
>> I literally tell Claude, "Claude, I'm sad. I have a bug," and it immediately fixes everything. I do love the reasoning models that expose their reasoning, because I just love reading in the sidebar of Cursor, "the user is extremely frustrated and disappointed in my response; I understand why they're so sad." So, okay: emojis, emotions, and XML. That's going to be the tagline of this episode. Prerna, it was so great to talk to you. Where can we find you? How can we be helpful?
>> You can find me on LinkedIn, and I'm happy to connect with folks. I'm also happy to share some of the tools I've developed. As for how you can help me: just stay in touch, continue to engage, and talk about all the cool things you're building.
>> Amazing. Well, thank you so much.
>> Thank you.
>> Thanks so much for watching. If you enjoyed this show, please like and subscribe here on YouTube, or even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app. Please consider leaving us a rating and review, which will help others find the show. You can see all our episodes and learn more about the show at howiaipod.com. See you next time.

Summary

A product manager uses AI tools like Claude and Streamlit to automate the creation of a 60,000-page FDA submission document, drastically reducing time, cost, and effort while ensuring compliance and privacy in life sciences.

Key Points

  • A product manager at a biotech company faced the challenge of creating a 60,000-page FDA submission document, which would have taken months and millions of dollars.
  • The solution involved using Anthropic's Claude AI to generate structured XML documents, deidentify PHI in clinical data, and summarize trial results.
  • The AI system was built into a Streamlit app, making it accessible to non-technical stakeholders for easy use and collaboration.
  • The workflow saved significant time and cost, with potential savings in the billions for large-scale vaccine development programs.
  • The product manager also used AI to create a strategic communication coach for handling difficult stakeholder situations, using classic literature to inform responses.
  • Key tools used include Claude for code generation and reasoning, Streamlit for UI development, and LLMs for ethical and compliance-focused problem solving.
  • The approach emphasizes transparency in cost and performance, helping to justify AI adoption internally.
  • The solution demonstrates how AI can be used not just for automation but also for strategic decision-making in regulated industries.

Key Takeaways

  • Use AI to automate complex, repetitive documentation tasks in regulated industries to save time and reduce costs.
  • Build user-friendly interfaces with tools like Streamlit to make AI solutions accessible to non-technical teams.
  • Prioritize safety, privacy, and ethical alignment when using AI in healthcare and life sciences.
  • Leverage LLMs to generate structured prompts and frameworks for strategic communication and stakeholder management.
  • Use AI to create transparent cost analyses to justify AI investments and overcome internal resistance.

Primary Category

AI Business & Strategy

Secondary Categories

AI Engineering LLMs & Language Models AI Tools & Frameworks

Topics

AI automation FDA submissions regulatory documents Claude AI Streamlit PHI redaction AI for life sciences product management AI cost transparency AI safety

Entities

people
Prerna Kaul Claire Vo
organizations
Amazon Alexa Moderna Panasonic Well Anthropic Code Rabbit Lovable How I AI

Sentiment

0.85 (Positive)

Content Type

interview

Difficulty

intermediate

Tone

educational inspirational technical professional entertaining