My AI Videos Hit 1M+ Views (Veo3 + Sora 2 Demo)
What if I could show you the exact workflow for creating AI video ads that get hundreds of millions of views? Well, I can, because today I brought on the viral AI ad man himself, PJ Ace. This guy is the number one guy when it comes to creating the most viral AI videos, and today he shows you his entire workflow: all his prompts, all the tools he uses, Veo 3, Sora 2, this app I'd never heard of called Reve, ChatGPT, how he uses Figma, and we just go through it. There was nothing that he held back. There are going to be a bunch of people who make it to the end of this episode and learn how to create viral AI videos, so I can't wait for you to enjoy it. People charge thousands of dollars for this sort of sauce, but on the Startup Ideas podcast, it's free for just a like and a comment. Enjoy. >> PJ, by the end of this episode, what are people going to learn? >> They are going to learn how to make a video that gets 230 million views, if you're lucky. But also our entire end-to-end process that we've used to scale to a six-figure, then seven-figure agency in a few months. It's been the most insane liftoff in the building of an AI-native agency, and I'm excited to teach everyone exactly how we do it, step by step. >> Okay. And, you know, keyword "exactly," because we have a word on this podcast called sauce, and we don't like to gatekeep the sauce. So PJ, can you commit to spewing as much sauce, sharing as many prompts, and sharing your screen as much as possible, so that by the end of this it increases the probability that people can actually create one of these AI videos that go viral? >> Full open kimono. Don't point the cam down. Not wearing any pants right now. You guys are going to get full exposure with this. >> Okay. All right. Let's rock. >> Sweet. So Greg, you had me on the pod a month or two ago, and I shared how I made that viral video for Kalshi that got 50 million views.
And today I want to dive deeper, because we have better tools. Obviously, the exponential growth of AI tools means every other day we get a 2x improvement in quality, so it's pretty nuts. I've got a lot more tools in the tool belt that I'm excited to roll through. We recently had an ad for David Beckham that did like 230 million views, but the process for that was probably a little too complicated. So we're going to do a simpler one, for a company called Origin Financial. This got 2 million views. I'm really excited to dive into this ad. First we're going to watch it, and then I'll take you through it step by step. >> Beautiful. Let's do it. >> All right. In the past, it was easy to get bad financial advice. >> It's a time share in beautiful Pompeii. Get your own cake. >> They said there was gold here yesterday. >> 20% off on the first unsinkable ship. >> This is your college fund. >> Of course you're diversified. You have Enron and Blockbuster. >> I made all my money in meme stocks. Hey, get off my yacht. >> You've had money questions for years. Origin finally has answers. >> Two boat rides for the price of one. Score. >> So good. My favorite part about it is just how extra it is. >> Yep. Yep. And that's what we tell our clients from the start. At Genre, we typically work with larger companies. They want to use AI video, but they're not sure how to do it without getting the pitchforks drawn, because people are like, "Oh, you're cutting jobs." So we always lead with: this is going to feel like a Super Bowl commercial, in that it's going to be ridiculous. It's going to be comedy first, mostly jokes, and a little bit of brand at the end that connects it to you guys.
This is the only way we've found to mitigate a lot of the "AI is disrupting everything" customer pushback. Like you said, if people are laughing or entertained the whole time, one, they're more likely to watch the full ad, which is what brands want. And two, they're less likely to bust out pitchforks, because clearly this is stupid and ridiculous, but it's kind of funny, and you're like, "Oh, okay, I get what they're trying to do." >> So I think a lot of people have played with AI video. They want to do what you do, but they don't know how, right? They've scratched the surface, they've tried it, and maybe it didn't work how they thought it would. So what's the strategy? I've seen the result. How do I get to that result? >> Yeah. So we break it down into steps. Step one is scripting. But first, let's pretend you want to be a filmmaker and work with clients. Everyone always asks how much they should charge, and the rates vary. Starting out, you maybe should charge nothing. Build up a spec portfolio targeting a niche: drink companies, beer companies, perfume companies, clothing companies, whatever. I always say follow your interest to whatever videos you like to make. You're naturally going to want to play with it and stick with it, and you'll outlast your competition because it doesn't feel like work; you're following your curiosity. So start in that domain, then reach out to brands that are look-alikes, competitors, et cetera. Always serve a niche initially, and then you can expand out from there.
The first video I made was a pharmaceutical company video, and I had a ton of pharmaceutical companies reach out. But then I was able to expand my portfolio with the Kalshi ad and some other work, to where we can now do a full suite of services. Anyway, you start off in the scripting phase with a given client, and usually we try to pitch three concepts. Now, we work with pro writers. I'm a good writer, but I'm not a great writer; I'm a good director, but not a great director. As you expand over time, you do want to work with experts so that everyone's focusing on their genius, and eventually you play the role of executive creative director, or a producer orchestrating all the talent together. But when you're starting out, you've got to learn to wear all the hats and be an army of one, which is where most filmmakers or creators have that background of knowing how to dabble in enough of this. So anyway, step one is the script, which we went through last time. Let me pull up the script for this one. To give a brief overview of the entire process very fast: it starts with the script. That goes into ChatGPT to be turned into a shot list. Then you create a Figma board for laying out all your images, generate all the images, move into a Veo 3-style animator to actually animate all the clips, and then put it into an editor. That's what we're going to cover over the next 30 to 45 minutes. So let's jump into the script. First, you have to start with the big idea for the spot. The big idea here is that we like recognizable IP to start, so people have something familiar to latch onto. So the opening shot we wanted was this guy in Pompeii, which is iconic, with the mountain about to explode in the background.
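The end-to-end pipeline PJ just outlined (script → ChatGPT shot list → Figma board → image generation → Veo 3 animation → edit) can be sketched as a chain of stages. This is purely illustrative; the function names and string outputs are stand-ins, not any real tool's API:

```python
# A minimal sketch of the workflow stages described above.
# Every stage function is a stand-in, not a real tool API.

def write_script(brief):
    # Stage 1: concept + script, iterated with the client in Google Docs
    return f"SCRIPT for: {brief}"

def script_to_shot_list(script):
    # Stage 2: ChatGPT turns the script into numbered scenes/shots
    return [f"Shot {i}: beat from '{script}'" for i in range(1, 4)]

def generate_images(shot_list):
    # Stage 3: one or more still frames per shot (Reve / Nano Banana),
    # laid out on a Figma board so the director can make selects
    return {shot: f"image_for({shot})" for shot in shot_list}

def animate_frames(images):
    # Stage 4: frames-to-video in Veo 3, each still used as a first frame
    return [f"clip_from({img})" for img in images.values()]

def edit_timeline(clips):
    # Stage 5: drop clips sequentially into an editor, add a music bed
    return " | ".join(clips)

def run_pipeline(brief):
    script = write_script(brief)
    shots = script_to_shot_list(script)
    images = generate_images(shots)
    clips = animate_frames(images)
    return edit_timeline(clips)

print(run_pipeline("bad financial advice through history"))
```

Each stage produces an artifact the next one consumes, which is why the team can swap individual tools (Reve vs. Nano Banana, Veo 3 vs. Kling) without changing the overall process.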
So that's what we have in this opening shot. Later on we also had the pyramid scheme, which is kind of a funny one, and the opening shot here is an exploding volcano. The idea is that he's trying to sell a time share, and the conceit of the whole thing is that in the past, it was easy to get bad financial advice. So our writing team sat around and went, "All right, what's the worst financial advice given out throughout history? Let's try to put it in chronological order leading up to today." Initially we thought, okay, Pompeii would be funny if you're trying to sell a time share at the foot of the "dormant" volcano. Maybe something with Marie Antoinette, where she wants to buy more cake and there are rioting soldiers outside. It'd be really funny if we could have a Titanic moment. It'd be funny if we had Beanie Babies as the investment thesis for your kid's college fund. Of course you're diversified: you've got Enron and Blockbuster. And then we wanted to end with some sort of memecoin jab at all the NFT bros. So the writing just starts off as loose concepts with ChatGPT. It's like, "Hey ChatGPT, give me ideas for iconic moments throughout history that incorporate bad advice, and then give me some potential lines for each." It'll spit out a bunch of different suggestions. And ChatGPT is not funny. 99 times out of 100, it's not funny, but one time it's good, or it'll spark an idea that actually gets you one of these great lines. And a lot of times it's not even that the line itself is funny.
It's the comedic contrast, like the Beanie Baby college fund, or the final bit, "two boat rides for the price of one," from the guys on the Titanic life raft. It comes out as you iterate, and a lot of times you'll do half the commercial and then more ideas spring to mind for how you can dial it up and elevate it. Does that make sense? >> Yeah. So what I'm learning here is there are three ways to get people to continue watching. One is using existing IP that's relevant, that people understand. People know Marie Antoinette, they know Pompeii. And these are public domain IP, right? So you want to use public domain IP. That's one. Two is juxtaposition: how do you incorporate juxtapositions in a video? Because that's going to get people to share it. You don't want them leaning back; you want them leaning forward, and that helps. And third is leaning into what's internet-native, what's trending. Obviously Beanie Babies aren't trending, but meme coins are, right? So the fact that you ended with that was a very smart strategic move, because you shared it on X, and a lot of people trade meme coins and talk about them on X. >> Yep. Yep. Exactly. And this one feels a little timeless. We wanted it to feel funny. I'll pull up real quick one of the other videos that was similar in nature but different. So this was the Kalshi video. We don't have to watch it, but the conceit was also historical moments. So we had: will Jesus rise again, Peter? The British are coming. Are they coming? It was basically odds throughout history on different underdog moments: will the Trojans accept their gift? Will David defeat Goliath? The Wright brothers are here.
And that one was not really funny so much as it was inspiring, with big dramatic music, and then boom: it's your turn to defy the odds. So it's a similar structural framework: tie something to a brand in a way that feels historic and universally relevant, if you will. >> Sounds good. >> Yeah. So that's a similar kind of script and framework. So let's just say you've got your script and the client is like, "Hey, I like this big concept." Then you move on to the next phase, which is the exact scripting phase. We don't work with scripting software; we just do everything in Google Docs, because clients can make notes on everything. Once you've locked down your script, we move into the next phase, which is taking it all into ChatGPT. What ChatGPT is going to do is: you upload the script and say, "Hey, give me a prompt for each of these as images." We like to work with images because it's a lot easier, cheaper, and faster to generate images for the commercial than to do everything text-to-video, where you're basically generating the entire commercial blind and you're not sure the client's going to like it. So it's a huge benefit to do everything shot by shot. That's why we take the script and do scene one, scene two, scene three, scene four, scene five, all the way to the end, and then start to fill it in with drafts of each shot. Typically what you'd see here is all of the shots that led to the final one, which is what I'm going to show you in a second. So Marie Antoinette had different poses and so on, and you'd typically see that reflected in here; you've got lesser versions.
Anyway, we're going to use a platform called Reve. It's just app.rev.com, and it's pretty awesome because it gives you three different versions of whatever you're prompting. So we go into ChatGPT and I'll say, "Hey ChatGPT, here is my script. I need you to turn this into scenes and a shot list for each scene, and I need you to give me a prompt for each shot, and I want the prompts structured." I have a master prompt kind of thing I use for all my projects. These days it's not that sacred anymore; the image models are great and they'll get you good stuff regardless. But it's something like: a nerdy, nervous Roman real estate agent strides forward, arms out in a conversational gesture, blah blah blah. So it's going to describe a lot of motion, even though we're only doing images to start with. We got something like this. I lost the real photos, but as you can see, a lot of these images are quite similar to what we had in the final ad, so for all intents and purposes these are the same prompts. So we'll go through it. Now, let's just say we paste in a Pompeii prompt here, or let's go Titanic. We'll paste it into Reve, and Reve gives you three variations of the images. And now, thanks to Nano Banana-like technology (this also works in Nano Banana; I just find Reve's interface a bit better, and we like the photorealism here), you can even run it through another enhancer AI to make the skin more detailed and all that kind of stuff. So anyway, it's going to give us frames here. Now, it's going to suggest things: can we move the passenger closer to the camera? Can we add more dock workers? Can we show the passenger?
So it's like, okay, let's actually click this suggestion. We click this photo and ask for it as a close-up, then hit enter, and it'll give us this image as a close-up, because it's referencing the image, and it'll do three variations of it. As you can tell, this is light-years ahead of where Veo 3 text-to-video was back in the day, when you had to burn $4 a shot and just pull the slot machine blind. But now you're really able to go shot by shot and build out each shot sequence: okay, I like this for the wide shot, I like this for the close-up, I like this for the peasants looking in through the door, then I'm going to cut back to her. Are you tracking with me so far? >> Yeah. I mean, it makes so much sense, by the way, to do script to images, then the video. In retrospect, we were crazy for going straight into it. >> Just raw-dogging text. >> That was crazy. The other thing I was thinking is this seems better than Nano Banana, this Reve app. >> It's pretty good. There's a lot of realism, and the structure feels right with this chat interface. Sometimes it'll get things wrong, like this guy looks like a psychopath, right? So I would reroll that character. But a lot of this looks super realistic; this doesn't look AI-generated at all. And again, to iterate, it's like: let's make this character more of a wide shot and include the captain yelling at him. So it's just very iterative. And for non-filmmakers, you really don't have to know a ton about angles and lighting and camera movements and all those things. It just builds it out with you conversationally, which is kind of the future of this.
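The conversational refinement loop described here, a base image prompt plus stacked tweaks like "make it a close-up" or "include the captain," can be sketched as a tiny prompt-history helper. The class and prompt strings below are illustrative assumptions, not Reve's actual API:

```python
# Illustrative sketch of shot-by-shot conversational refinement.
# Nothing here calls a real image model; it just shows how a base
# prompt plus ordered refinements can be tracked per shot.

class ShotPrompt:
    def __init__(self, base):
        self.base = base
        self.refinements = []

    def refine(self, instruction):
        # e.g. "make this a close-up", "include the captain yelling at him"
        self.refinements.append(instruction)
        return self  # chainable, mirroring the back-and-forth chat flow

    def render(self):
        # Final prompt sent for the next variation
        return "; ".join([self.base] + self.refinements)

titanic = ShotPrompt("Excited passenger on a bustling 1912 dock, ship behind")
titanic.refine("move the passenger closer to camera")
titanic.refine("make it a wide shot and include the captain yelling at him")
print(titanic.render())
```

Keeping the refinement history explicit is the point: each generation references the previous image, so the prompt record doubles as a log of how a shot was arrived at.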
Unless you go into world models; I think we're going to see those start to infiltrate this stuff soon. But anyway, that's basically what we do: copy code, paste code, back and forth, until we have all the shots here. You can regenerate each of the images, but you can see how you build out multiple angles, coverage of a scene, consistent characters, et cetera. You've got the guys smoking in the thing, and that's essentially the core of the image generation. >> Cool. Okay. So we've got our images. >> Yep. We've got all our images, we're going to save them (I think we just hit the download button here), and then we start to put all the downloads into our master boards. Like I said, typically I'll have a bunch of alt versions here, because of how we work: we have a dedicated writer, a dedicated director, and then AI cinematographers. The AI cinematographers are the ones who take the script and the director's treatment and generate a lot of these images, filling out these boards with multiple options. You can see it here. This is for the David Beckham project we did with IM8, where the director will say, okay, I need this eyeball. They'll upload a reference shot of an eyeball, but they want it unique to the project, and boom, the AI artists churn out a bunch of examples. Or this opening shot: we need a cool sci-fi setting, multiple angles of coverage, and our AI cinematographers just do a bunch of shots, Serengeti, et cetera. So this is for that IM8 ad.
As you can see, the more complicated the ad, the more options you can have: hundreds of generations for just one shot on the shot list. And I find that when you're doing more stylized stuff, like this IM8 David Beckham ad that got 230 million views, you want to give a lot of reference images for the tone and the style. So what I showed before is the basic version; this is expert mode. You'll have the line of dialogue here, "red or green, it's a simple choice," for the first 10 seconds, and then the director comes in and says: wide shot of an abandoned facility. In this case we had the tennis star arena, and we were able to do deepfake shots of her. So you're getting a lot of shots in this one setting, our artists are generating them, and the director comes in and makes selects from all of them. That's what it looks like on a more complicated spot. So that brings us to: we've generated most of the shots on the shot list, we've laid them out here, and now we go on to Veo 3 animation as our next phase. Now, I do want to note that there are a lot of programs right now, and I'll list them off briefly. Veo 3 is probably the best model at the moment for making characters talk. It's just really realistic, and the motion's great, like he's walking up here and the camera pans. So it's even worth noting how we do this. We have an original prompt for the Titanic, which is here (it's like: bustling dock, blah blah blah, snag tickets). Even though that was an image prompt, I'm still writing the dialogue, and then I'm actually just copying and pasting that same prompt into Veo 3 and uploading.
So instead of text-to-video, we're going to do frames-to-video, and we're going to upload that shot we just downloaded from Reve. That acts as our first frame. Then we use a similar prompt, "just snagged first-class tickets," and ChatGPT can also help you here: "Hey ChatGPT, here's my image prompt. Now I need it to be an animation prompt. Tell me, what's the camera movement here? Are we starting on him, then panning to the ship?" That'll help you make these shots dynamic, like you see here. >> Yeah. And by the way, PJ, you're literally an expert, so you probably understand camera movements, right? But the average person listening to this, including myself, barely knows what a camera movement means. >> Yeah. Dolly, jib, do a three-quarter turn. Honestly, ChatGPT gives you pretty great suggestions, and you can also be stupid with it, like "move camera left to end on the ship." It doesn't matter; it'll do the same as some complicated director term. And so, yeah, I've found that Veo 3 is pretty good, and its character performances are the best. However, if you're not having characters talk, that's when we'll move to some of the other animation models. To highlight a few: there's a platform called Kling, with a K. Great 1080p, and it actually might be 4K now. Another called Luma Labs. Another called Seedance, which is from ByteDance, the TikTok parent company. Another called MiniMax. Great models. Honestly, they're all kind of similar these days.
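The step PJ describes, promoting a still-image prompt into a frames-to-video prompt by adding dialogue and a camera move, can be sketched as a small prompt builder. The format is an illustrative assumption (Veo 3 accepts free-form text, so there is no single required structure):

```python
# Hedged sketch: turn a still-image prompt into an animation prompt
# by appending a camera move and a line of dialogue. The template is
# illustrative, not an official Veo 3 prompt schema.

def build_animation_prompt(image_prompt, dialogue=None, camera_move=None):
    parts = [image_prompt]
    if camera_move:
        # Plain-English moves work fine, e.g. "move camera left to end on the ship"
        parts.append(f"Camera: {camera_move}")
    if dialogue:
        parts.append(f'He says: "{dialogue}"')
    return " ".join(parts)

prompt = build_animation_prompt(
    "Bustling 1912 dock, excited passenger holding tickets, ship behind him",
    dialogue="Just snagged first-class tickets!",
    camera_move="start on the passenger, then pan to reveal the ship",
)
print(prompt)
```

In the workflow above, the still image is uploaded as the first frame and this text drives the motion and speech, which is why the same base prompt can serve both the image and video stages.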
That's why most people just default to Veo 3: it does the best talking performances at the moment, and it's Google, with their terms of service and indemnification. When working with clients, a lot of times we'll tell them: if you want us to use Google from start to finish, you're still going to have a great experience, because Nano Banana's got you covered on image generation and Veo 3's got you covered on speech and animation, and Google's broader indemnification policy covers commercial production. They're solid: great training data, ethically trained, et cetera. >> And even Gemini for scripts and stuff like that, right? >> Yeah, exactly. You can do that, and we're doing it for an upcoming project for Google. It's pretty seamless to go all-Gemini now. Technically, I think you can generate text in Gemini, images in Gemini, and even video in Gemini. But if you're going to use the Google suite, it's typically best to use the Gemini app for that, and then go to labs.google for images and videos. They also have another thing called AI Studio. Google always has great foundational technology, but the real struggle for them, I think, is the application layer that sits on top: how do they make it seamless and cohesive? They're still working on making this all one big filmmaker suite. In the meantime: the Gemini app or ChatGPT for the scripting phase; for images, you can use Nano Banana in Google's suite, or you can use a platform called Freepik that has all of the image models. So let's actually look through here.
So in Freepik, if you go to generate images, you have all the models: the ones from Google, Nano Banana, Flux from Black Forest Labs; basically they have everything in here. Now, the downside to using an all-in-one platform for image and video generation is that they're on API pricing, so it's going to be a lot more expensive than going direct. But image generation these days is practically free: you get effectively unlimited image generation on Freepik for, I don't know, 20 bucks a month. It's nominal. And then I would just buy a subscription to Google Flow, because for, I think, about $120 a month you get what I believe is unlimited generation on fast mode. It's important to note that you shouldn't use quality mode; you should use fast mode, because it's like 80% of the quality and it's effectively included. Then you can do portrait or landscape, and choose the number of outputs; you could do four if you want. And like I said before, if you want to do frames-to-video, you just add a starting frame here and then write your prompt, which makes the characters talk and so on. >> Beautiful. >> Yeah. So that's the process in a nutshell, and then obviously you take it into your editor. As you saw, we're essentially just putting generated clip after generated clip into the timeline sequentially, and this ad was very simple to edit. The video models like Veo 3 will add sound. You don't want them to add music (you just put a basic music track under it yourself), but each clip already has dialogue and sound effects on it, so the edits are extremely simple. You really don't need a ton of editing experience now, with everything laid out for you like this.
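The subscription-versus-API-pricing point can be made concrete with back-of-envelope arithmetic. The figures below are the rough ones mentioned in the conversation (about $120/month for Flow, about $4 per text-to-video generation in the early days); treat them as illustrative, not current list prices:

```python
# Break-even sketch: at what monthly clip volume does a flat
# subscription beat paying per generation? Figures are the rough
# numbers from the conversation, not current pricing.

def breakeven_clips(monthly_fee, per_clip_cost):
    # Smallest number of clips per month where the flat fee
    # costs no more than paying per clip
    clips = 0
    while clips * per_clip_cost < monthly_fee:
        clips += 1
    return clips

print(breakeven_clips(120.0, 4.0))  # 30 clips/month
```

At agency volume (hundreds of generations per shot on a complicated board, as described above), 30 clips a month is crossed almost immediately, which is why the flat-fee route wins for this workflow.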
>> So, I guess, Veo 3 doesn't put on a music track, right? So you have to go find one. Storyblocks is the popular one. >> Yeah, most people use Epidemic Sound as well, because for like $9 a month you can get unlimited songs, and they've got a pretty big selection. The reason you wouldn't want Veo 3, or even Sora, to generate a music track is that it will generate a new music track for each clip, because it's only giving you a piece of the pie. You want one track under the whole thing. >> So no AI-generated music, but the sound effects are great. And then, this is probably a dumb question, but what software are you using to actually put it all together? >> If you want something free, you could just use CapCut, which is what a lot of people who want something simple use. It's an online editor made by TikTok. For people in the mid-stage, I think DaVinci Resolve is actually free as well. Most of the industry runs on Premiere, which is like 19 bucks a month, I want to say. I like Final Cut just because it's probably the easiest platform performance-wise, like iMovie on steroids. But they all work; they're all great. >> Anything else we didn't cover on this whole process that you want to mention? >> There are some interesting tricks. You can do this cool thing in Veo 3 where, instead of adding this as the starting image and then describing what Beanie Babies look like, you include in the uploaded image a small inset image of the Beanie Babies you want him to uncover. So when I uploaded this as the starting image and said "he uncovers this," Google actually outpainted it, and he rolls the cover up to reveal the Beanie Babies that were there.
So if you look here, this is kind of a complicated one. We start with him midway here, but the clip actually started with the pile fully covered; we had to give it a reference image for what Beanie Babies look like, with the big Ty tags and all that. There are weird hacks like that, where you can use picture-in-picture to describe what you want to pan the camera over to. Or, say the model didn't know what the Titanic looked like: you could put a picture of the Titanic here and say, "remove this image, then pan over to this ship," and it would pan over to a ship that looks like the one in your reference frame. >> Dude, this is crazy. Even the Beanie Baby shot. I just recorded this docuseries, and I saw how expensive and time-consuming it is to shoot things like that. You couldn't even do that shot, right? Because the Ty tags aren't that big, and it's so much more impactful with the tags being big. >> Yep. Yep. Yeah. This ended up actually being a different image, but it's similar, and we had a hundred variations of it. The issue is that you want to generate this as kind of your end frame, but you can't have it run in reverse to pile everything in. That's why the picture-in-picture helped, so the AI understood what we were opening to. The other thing you could do here, actually, is that this is a good case for just doing straight text-to-video, where you simply prompt the Beanie Babies. But anyway, those are the details. So that's it in a nutshell.
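The picture-in-picture reference trick above amounts to a particular prompt pattern: embed a small reference image in the starting frame, then tell the model to remove the inset and reveal that object in the scene. A minimal sketch of composing such a prompt (the wording is a hypothetical illustration, not a documented Veo 3 syntax):

```python
# Sketch of the picture-in-picture reveal prompt described above.
# The phrasing is illustrative; the real trick is that the uploaded
# starting frame itself contains the small reference image.

def pip_reveal_prompt(scene, reference_label):
    return (
        f"{scene}. A small inset image in the corner shows {reference_label}; "
        f"remove the inset, then have him uncover {reference_label} "
        f"matching that reference exactly."
    )

print(pip_reveal_prompt(
    "1990s living room, dad kneeling by a covered pile",
    "a pile of Beanie Babies with large Ty tags",
))
```

The same pattern covers the Titanic example: the inset tells the model what the to-be-revealed object looks like, since it cannot run the reveal in reverse from an end frame.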
And we've used this process for a lot of our main viral videos. If we go to my page here... All right. To show the same process in another video, here's one we made for a company called Ramp, a financial services credit card company that makes it really easy to expense things. Same process. I'm going to walk you through it right after we watch this ad. >> Richards, auditors, show me your mileage logs. >> It's audit season. >> Wait, guys. We're good. >> With intelligent receipt capture, Ramp has your back. >> So I can just take a photo. >> Ramp. An audit doesn't need to be a horror movie. >> Let's get out of here. >> Audit this. >> That was fun. >> That was fun because it's just things blowing up, dark. That's like a horror movie in an ad. >> Yeah, that's exactly it. So, as you can see, initially we did it all in text-to-video and we didn't like the look. This was when image-to-video first came out. We did the ad in text-to-video and it just looked too AI: the faces were morphing and it looked off. So we actually made the whole ad and then threw it all away. That's when we first realized (this was like two months ago) the power of image-to-video, and how much better the base images could look if you did everything text-to-image and then image-to-video. So, similar process: we had writers write a great script, a director create a shot list of how it all flowed, and from that our AI directors of photography came in and gave us variations for each shot on the shot list, like, we need a zombie coming up on the pane of glass.
We need people reacting to it in horror, and then we need big zombies breaking in. And, you know, look at this. She's awesome. So it really upped the fidelity of a lot of this stuff. This is great, and it really made everything come to life. Our directors were able to come in and just select the shots that they loved. So yeah, this made it so much faster, and obviously it looks so much more cinematic. The prompts that we use, we keep in this doc and we just bring them over here. I can send them to you, Greg. And again, this was all done in Rev as well. The prompt is something like this, and then our team can come in and, if they want to do a slight tweak, they can take that same prompt and do further variations here. Once they like a given character, they can prompt her in different angles and poses. >> Yeah. The characters don't look that AI-generated, right? Which is so cool. >> Yeah. And it just gets better. You can even run a second pass through a program called Enhancer.AI, where you can add acne and details to her face. >> Right. >> So it's, uh... >> Rough her up a little. >> Exactly. >> Make her more normal, you know, because some of these people look too perfect, right? Yeah. It's like they've got this AI glow, and you gotta, you know, that's not how we look in real life. >> Yeah. Most of us, at least. >> Yes. Yes. So yeah, man, that's the spot we did for Ramp, and that's the process we rinse and repeat for different clients. So I think this is a major opportunity right now. And the real question is: how much is Sora going to disrupt this existing workflow? Because, as you saw, I mean, I don't have to show it because everyone's been watching it for the last week, but basically Sora does the script automatically.
It automatically does the image and video generation, the sound effects, and the music, all in a 10-second bite. Now, the problem is it's limited to 10 seconds, but I was watching behind-the-scenes interviews and they're saying 30 seconds is coming, 60 seconds is coming, character consistency is coming, and probably the ability to tweak individual clips so you can edit. So that's the question: we're disruptive to the big agencies, and this is very disruptive to us, because it essentially takes our 6-to-8-week timeline down to, what, a week or less, once you're able to tweak things and the quality gets better, which it will over the next three to six months. What it's going to mean is that we have to do a lot more volume. We're probably going to have to lower our prices and do higher volume, but it's a good thing for brands, because brands will be able to release a new ad each week >> at a lower price point. >> And for us, it's good because we'll just do retainers with brands. And it's like, hey, we're going to optimize for comedy writers and supervising directors, and we'll probably minimize our dependency on AI cinematographers and some of the other animator roles that Sora automates. But it's going to be a wild next six months. I thought we'd be here a year or two from now. We're here next month. >> It's clear that Sora is going in the direction of productizing pretty much the entire workflow you've shared over time. It's just going to get better and better. >> That's right. >> That being said, okay, how can you take advantage of this opportunity? The way to do it... the limiting factor is ultimately just great ideas. So from your perspective, it's: how do I hire people who've got really great, scroll-stopping ideas, who aren't thinking like everyone else?
And then, you know, from my perspective as a founder, I want to create these ads for myself. I'm just kind of like, well, I just need one really good idea a week, >> right? >> And if I can get one really good idea per week, and one of these ads pops and gets me the right CAC ratio or goes viral, that sets your company on a trajectory. Yeah, I think that's going to be the real science. Like the Stephen Hawking clip, have you seen it? >> Yeah. Not only have I seen it, I've watched it like five times. I'm obsessed with that clip. I put that clip in. >> Okay. So, the thing I love about this clip is, well, one, it's just stupid, but two, the physics are photoreal, and it's kind of like, maybe this could be a sport. It's like Rocket League. Maybe this could be a sport. So if I were a brand, I mean, this is the real question: if you're Red Bull, do you stick a Red Bull in his hand as a close-up shot and then cut to the wide of this? And do you have to get permission from Stephen Hawking's estate? The real question for Sora is: if the estate of Hawking uploaded him as a cameo and then charged brands a likeness royalty fee or something, that's going to unlock meme branding as a thing. You know, a lot of these estates of the dead, whether it's Tupac or Kobe, I think that's going to open up. And in light of that, you're going to have all these historical figures, like Einstein or Plato or whoever. It's just going to be open season, kind of like we were just showing you with our ads, on open-source IP and characters that brands can work with. >> Yeah. I saw Sam Altman the other day say, "I hope Nintendo doesn't sue me." >> Oh, everybody's about to sue him.
But he just raised, like, you know, they're at $500 billion as a private company, so I think they'll be okay. Obviously, they did the dirty playbook of no guardrails and restrictions to get it to the top of the App Store, and then they nerfed the ever-loving crap out of it. So it's not that fun these days; you're constantly getting "cannot generate this, cannot generate this." I do think, as time goes on, they're going to get certain IPs to opt in, and then those IPs will be revitalized. I was talking to someone in Japan who owns a bunch of old IPs, and I was like, "You've got to talk to Sora to get yourself opted in so that we can make episodes for your old show." And now, let's pretend it's Thundercats: He-Man or Thundercats becomes a huge IP again because everyone can remix it. >> Yeah. I mean, this just gets me thinking, I want to buy IP. You know what I mean? IP is so undervalued right now. >> Yes. Because we know that OpenAI and the like are going to do deals with these IP holders. It's only a matter of time. >> Yep. Yep. I think the other opportunity is production companies, agencies, small creators that can act as a go-between. So, the old Thundercats stuff is probably all owned by Hanna-Barbera or Cartoon Network or Universal, I forget the whole stack, or maybe it's smaller Japanese IPs that are still loved, but they don't have the internal structure to know how to do the prompting. So what I see in my head is an exchange marketplace, almost like Fiverr, where you have brands opting in, like, "Okay, train on my data," but they need people to do the labor, you know, a small team. And I really think it could be simple.
It would just need a good Hollywood writer, a good director, a good art director who can maintain visual cohesion, and then some sort of general all-around editor. They can AI-generate voices, or you can hire real voice actors; it just depends on the budget. I think the budgets will come down to where, to do another He-Man or Thundercats, you could probably do it for like 30 grand on the lowest end between all the roles, if you're doing it at scale and at volume. And then for some of the bigger, more recent IPs like Pokemon, they'll still want to spend a couple hundred grand an episode, but the price gets super compressed. >> PJ, I love having you on, man. You really are the viral AI ad madman. You really are. And I appreciate you sharing the sauce with us. For folks listening, in the show notes I'll include where you can follow PJ on X, I'll include his newsletter, and I'll include a link to genre.ai. Is there anything else I should be including, PJ? >> That's it. No, if anyone wants a great course, I always plug my buddy Ror Heath's course called GenHQ. I can provide you a link for that as well. I don't yet have a course; I'm trying to do one by the end of the year, but in lieu of that, Roro's got a great top-to-bottom course and it's like 99 bucks. It's fantastic. >> Beautiful. Yeah, if you send a link, I'll include it in the show notes so people can check it out. And my advice to people is just get your hands dirty. This was a how-to on creating high-quality cinematic AI videos, and sometimes you've just got to get your hands dirty. >> Yeah. I think I'll be back on in a couple weeks once Sora unlocks like a pro mode and you're able to dig in, because I do think that everything I laid out is important for now, but it's going to get baked into Sora's suite, where you can edit clips.
You can kind of double-tap into this, and this entire process is going to get a lot faster and a lot cheaper. So, like you said, Greg, the biggest takeaway for any creator is: just start creating, start putting shots on target, consume the viral content, and then figure out, if I were a brand, how would I find this palatable to have my image associated with it? And once you create a portfolio full of branded viral content, that's when the keys to the kingdom unlock and you can make a lot of money and grow a huge agency. >> PJ, you'll have to come back on again. >> Let's do it. Okay, talk soon, Greg. Thanks, man. >> Take care.