How to Use Figma AI To Make Sites 10X More Beautiful

AILABS-393 · Video ID: _-FEgdQ_L4I · Watch on YouTube · Published October 25, 2025 · Scored
Duration: 9:55
Views: 106,507
Likes: 2,126

Scores

Composite: 0.66
Freshness: 0.00
Quality: 0.85
Relevance: 1.00
1,936 words · Language: en · Auto-generated transcript

Figma has always been an amazing tool, but many of you aren't using its AI features correctly. Figma isn't going anywhere; it's not getting replaced by AI anytime soon. But using these features poorly, or not at all, puts you at a real disadvantage: you're simply not working as productively as you could be. So in this video I'm going to fully utilize those features, show you the systems I've developed, and show you how to use AI to turn your ideas into designs more accurately and much faster.

Figma Make is a new AI feature Figma has released. It lets you prompt designs into existence, with working code behind them. Since Figma has massive amounts of real design data to train on, it can actually produce good designs, and I've seen people make some amazing things with it online. However, when I tried it myself, I realized you can't just prompt it and expect fully functional apps that also look good. I gave it one giant prompt with everything dumped in: every page, every color, no organization at all. It generated something, but it wasn't a fully functional prototype. The pages were disorganized and the implementation wasn't proper. Because there is code behind the design, that code needs to be organized and structured to function correctly; only then can you see how the app would actually look, or whether your concept for the app even makes sense.

So I came up with a workflow for Figma Make, and that's what I'm going to show you now. The workflow relies heavily on planning. Good planning is the only way to get the results you want: if you're confused, your AI agent will be confused too. First things first, paste a set of standard instructions at the start of every Figma Make session. These are prompts that enforce best practices for writing code.

The very first step is planning out your entire app before diving into Figma. You can use any model or AI agent for this; I use ChatGPT because I really like its web interface. You just need to brainstorm with it. For example, I was brainstorming an app that helps people find workspaces outside their homes to get work done. It helped me outline everything from the target audience to the core features I needed. After reviewing everything, I made some edits, finalized the plan, and had a clear picture of what my app was supposed to be.

Once your plan is finalized, come back to the workflow. It's a prompt-based workflow, and the goal is to prepare one big prompt that will generate a high-quality design through Figma Make. Step one is getting a complete app overview; I'll link the full workflow in the AI LABS Discord resources section. Once you run that prompt, it outlines everything inside your app, and that context is crucial for Figma Make to understand what your app is about. Step two is defining the architecture of all your pages. This was one mistake I made earlier: each page needs its own structure, and without it, inconsistencies pop up between screens. This prompt maps everything out so you can review it and make sure that's exactly how you want your app structured. Step three is creating the complete design system with ChatGPT: generate specifications such as colors, fonts, and overall style. At this stage, read through everything carefully and finalize what you want, because once this is locked in, that's exactly how your app will look in Figma. After completing all these steps, you combine everything into one big prompt.
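The exact prompts are in the linked workflow, but structurally the combined prompt is just the three outputs slotted into placeholders. Here is a rough, illustrative skeleton of that structure; the wording is mine, not the workflow's:

```text
[APP OVERVIEW]
<paste the step 1 output: what the app does, who it's for, core features>

[PAGE ARCHITECTURE]
<paste the step 2 output: every page, plus the sections and structure of each one>

[DESIGN SYSTEM]
<paste the step 3 output: colors, typography, spacing, and overall style>

Build a fully functional, responsive prototype that implements every page in the
architecture and follows the design system exactly.
```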
The combined prompt has placeholders for the outputs from steps 1, 2, and 3. Copy the responses from ChatGPT, paste them into the combined prompt, and go back to Figma Make. First give Figma Make the best practices prompt, then paste your final big prompt. It will take some time to process everything, figure out how to structure the code, and generate the design.

One of the best things about using those best practices is that the design Figma Make generates becomes really responsive. Everything is laid out correctly, and you actually end up with a usable prototype instead of the messy version it produced before. This entire thing was made by Figma Make. I genuinely like the design, and I think that's largely because of all the data this agent has been trained on. It created a clean, well-structured app where everything was organized exactly as I had discussed with it. Even the dark mode was fully functional, and the responsiveness was excellent.

If you're only prototyping, you can stop here and send the prototype to developers. But if you're a solo developer handling everything and need to actually implement this, this is where it gets interesting. Figma Make outputs a React app with the different pages already set up, and one of the biggest advantages of following those best practices is that everything is clearly structured. If you wanted to convert it into a Next.js app with a full backend, it would be easy to do.

To turn this into an actual app and connect a backend, you're going to need an AI agent. In my case, I'm using Warp Code, which is sponsoring today's video. I've been using it for a while now and really enjoying it, so special thanks to them for supporting the channel. Since Figma Make's generated code uses the same format and dependencies across all components, I noticed a lot of repetition, so I created a custom prompt to handle it. In the prompt section, I have a universal full-stack conversion prompt where I put all my instructions; it tells the agent exactly how to handle the implementation. I've already run this with Warp Code. It took some time, but I turned on auto mode and let it go through the components and handle the conversion automatically. Warp Code also builds a functional backend using mock data, and the mock data will work until Supabase is fully connected.

If you're a developer, you can open up the changes right there. It gives you a nice environment where you can check specific files to make sure the model did the implementation correctly, and once you're done reviewing everything, you can simply push the changes to GitHub. If you're wondering how I got the code out of Figma Make: you just download it, then enter your custom prompt into Warp Code and it converts everything into the app you see here. The entire Workspot app is fully functional, running on the mock data that was generated by Figma. If I want to connect real data, I'll just integrate the Supabase backend.
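As a concrete illustration of that mock-data-first pattern, here is a minimal sketch of what one generated data module might look like once it's been wired for a later Supabase swap. The Workspace shape, table name, and environment variable names are my assumptions for the example, not something shown in the video:

```typescript
// data/workspaces.ts — minimal sketch of the "mock data until Supabase is connected" pattern.
// The Workspace shape, table name, and env variable names are illustrative assumptions.
import { createClient } from "@supabase/supabase-js";

export interface Workspace {
  id: string;
  name: string;
  city: string;
  pricePerDay: number;
}

// Mock records keep every page of the prototype browsable before a backend exists.
const MOCK_WORKSPACES: Workspace[] = [
  { id: "1", name: "Quiet Corner Cafe", city: "Austin", pricePerDay: 12 },
  { id: "2", name: "Downtown Coworking Hub", city: "Austin", pricePerDay: 25 },
];

export async function listWorkspaces(): Promise<Workspace[]> {
  const url = process.env.NEXT_PUBLIC_SUPABASE_URL;
  const key = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;

  // No credentials yet: serve mock data so the UI stays fully functional.
  if (!url || !key) return MOCK_WORKSPACES;

  // Credentials present: the same call now reads from a real "workspaces" table.
  const supabase = createClient(url, key);
  const { data, error } = await supabase.from("workspaces").select("*");
  return error || !data ? MOCK_WORKSPACES : (data as Workspace[]);
}
```

Because the UI only ever calls listWorkspaces(), connecting the real backend later doesn't require touching the components.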
Now, what if you don't want to create your own designs? What if you find Figma designs online that you want to use directly? For that, Figma launched the Dev Mode MCP server, which pulls information from your design files and sends it to your coding agent. One thing to note: there is a remote MCP server, but I personally prefer using the local one, which means downloading the Figma desktop app instead of using the online version. There are also open-source and free MCP servers, but setting those up can be complicated; I have a video on that, so I'll leave the link in the description.

For this example, I found a mockup of an expenses app online that looked really good. Let me walk you through the setup. First, switch from Design mode to Dev Mode in Figma. Once in Dev Mode, enable the MCP server. To configure it, you need a configuration snippet from Figma's official site; I'll leave the link in the description. Copy the configuration, then head back into your coding agent. In Warp Code, type a slash command, and the first option will be to add an MCP. Paste the configuration, hit save, and your MCP is set up (a rough sketch of what the configuration looks like appears at the end of this section). Make sure your coding agent is connected to this MCP server. One important setting: the image source option needs to be set to download instead of local server, which allows the MCP to download any images from your design and implement them in your code.

Now, here's how the workflow actually works. First, the agent fetches all the metadata from your design pages, including links. Using those links, it captures screenshots of all the pages to understand which are actual screens versus promotional material. Then it creates a page-links.md file containing links to all the extracted pages. If your design has more than 10 pages, it also creates a design guide to manage the data; with fewer pages, it skips that step. From there, the agent generates a to-do list based on which pages need implementation. Warp Code started its tasks automatically, listing all the prerequisite steps to clone the design followed by each page, and then it implemented everything.

Another important aspect is the implementation rules and the general structure for your tech stack. You can always change it: just give it to any model and it'll adjust everything. There are a few fundamental things you should know about using the Figma MCP: images and icons should always be used as image assets, long screens need to be scrollable, and cards should be horizontally scrollable. I've gathered these and other smaller rules through testing and put them into a configuration file that really helps with implementation.

Let me show you the difference. Here's what it looked like when I implemented the design without that configuration file. At first glance it might seem fine, but once you open the app, it's completely broken: things aren't implemented correctly and the layouts are all off. Now, here's what it turned out like with the configuration file. The difference is massive. Most of the UI was created nicely and looked great. You'll notice some minor differences, like icons that should be white for better contrast; that's an easy fix, just prompt your AI agent to change them and it'll handle it. There were still a few spots where icons weren't implemented properly. It's a work in progress and I have to guide it a bit, but overall the implementation was solid. You'll find that configuration file linked in the resources.

Now, here's where things get even better. Just like the Figma MCP gives your AI agent context about your Figma files, the ShadCN MCP gives your agent context about ShadCN components. There are tons of ShadCN components and variants, plus great UI libraries built around them; one of my favorites is Aceternity UI, which has amazing animated components. The best part is that all the context for these components can be passed to your agent. So if you already have a design implemented using the Figma MCP, you can easily enhance it by integrating real ShadCN components.
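For reference, here is a rough sketch of what registering both MCP servers in a coding agent's configuration might look like, assuming the agent accepts the common mcpServers JSON shape. The Figma Dev Mode server's local URL and the ShadCN launch command have changed across releases, so treat these values as placeholders and copy the exact configuration from each tool's official documentation:

```json
{
  "mcpServers": {
    "figma": {
      "url": "http://127.0.0.1:3845/mcp"
    },
    "shadcn": {
      "command": "npx",
      "args": ["shadcn@latest", "mcp"]
    }
  }
}
```

The figma entry points at the server the Figma desktop app runs locally once Dev Mode's MCP server is enabled; the shadcn entry launches its server on demand.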
Just add the ShadCN MCP server to your agent, the same way you did with the Figma one, and you're good to go. I'll link videos on the ShadCN MCP in the description. Warp is free to try, but for a limited time my friends at Warp are offering their Warp Pro plan for only $5; use code labs to redeem it. That brings us to the end of this video. If you'd like to support the channel and help us keep making videos like this, you can do so with the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.

Summary

This video demonstrates a structured workflow for using Figma AI (Figma Make) to generate high-quality, functional app designs and code, emphasizing planning, best practices, and integrating AI tools like Warp Code for full-stack implementation.

Key Points

  • Figma AI (Figma Make) can generate functional app designs and code, but only when used with proper planning and structured prompts.
  • A successful workflow requires upfront planning: app overview, page architecture, and a finalized design system.
  • Using best practices prompts ensures organized, responsive, and usable prototypes from Figma Make.
  • Generated code can be converted into full-stack apps using AI agents like Warp Code, especially with a custom prompt for consistency.
  • Figma's Dev Mode MCP server lets AI agents extract design data and implement it as functional code, with better results when a configuration file is used.
  • The ShadCN MCP server enables integration of ShadCN UI components to enhance and modernize AI-generated designs.
  • Using a local Figma desktop app and configuration files improves reliability and implementation accuracy.
  • The approach enables solo developers to build functional apps quickly by combining design and implementation AI tools.

Key Takeaways

  • Plan your app thoroughly before using Figma AI to ensure accurate and functional outputs.
  • Use structured prompts with Figma Make that include app overview, page architecture, and design system details.
  • Leverage AI agents like Warp Code to convert Figma-generated code into full-stack applications.
  • Enable Figma's Dev Mode MCP server with a configuration file to improve implementation accuracy and UI consistency.
  • Integrate ShadCN components via the ShadCN MCP server to enhance the design and functionality of AI-generated apps.

Primary Category

AI Tools & Frameworks

Secondary Categories

Programming & Development, AI Engineering, AI Agents

Topics

Figma AI, Figma Make, Figma MCP, AI design workflows, AI-to-code, React, Next.js, Warp Code, Claude Code, Cursor AI

Entities

people
(none)
organizations
Warp, Figma
products
ChatGPT, Figma Make, Figma MCP, Warp Code, Claude Code, Cursor AI, ShadCN, Supabase
technologies
AI agents, React, Next.js, MCP servers, prompt engineering, auto layout, backend integration, frontend development
domain_specific

Sentiment

0.85 (Positive)

Content Type

tutorial

Difficulty

intermediate

Tone

educational, technical, promotional, inspiring