This is How I Design With AI Now

AILABS-393 VNx9Gy5pHZI | Watch on YouTube
Published: January 24, 2026 | Scored
Duration: 9:07
Views: 21,215
Likes: 583

Scores

Composite: 0.75
Freshness: 0.25
Quality: 0.89
Relevance: 1.00

1,825 words | Language: en | Auto-generated

AI design has been getting better with each new tool release and model update. But good design doesn't come from relying on a single tool and hoping your app looks great. It's always a combination of tools and resources that makes designs better, and more importantly, it's about using the right tool for the right purpose. Google's AI design tool, Stitch, just got a really cool upgrade that makes UI design much easier. And Vercel recently released something that has surpassed Claude's Chrome extension and has become my go-to tool for browser testing. We've got a lot to talk about today, because I wasn't expecting this combination to deliver such a solid design.

Google Stitch is now available as an MCP server, letting an AI agent create and manage projects as well as generate designs. With this update, I didn't have to write tailored prompts for Stitch myself; I let Claude do that for me. It offers multiple features: creating new projects, retrieving all of your active projects, managing projects by listing screens and fetching project and screen details, and generating new designs from a text prompt.

The installation process is explained step by step in their documentation. It requires installing the Google Cloud SDK using curl, but since I'm using a Mac, I installed it with Brew and let it download the Google Cloud CLI. You need to log in twice, once as a user and once as an application, because you need to connect it to a project inside Google Cloud and enable the Stitch API in that project, as it uses that project's resources for creating designs (a rough sketch of these commands appears at the end of this passage). Once you go through the entire process, you can connect Stitch to any editor. Since I use Claude Code, I set it up there, and all the tools were available for use. Now, someone has simplified this entire lengthy setup and created a Stitch MCP that handles the whole process, from installing Google Cloud to project setup, all by itself. But this is unofficial, just an independent experimental tool, though it works just as well for now.

No matter what we are building, planning before implementation is essential. I was working on a mock technical interview app that doesn't just cover technical interviews; it also covers other types of interviews, all aimed at jobs in the tech industry. Throughout the video, I found numerous problems in the way the Stitch MCP operates and in how it should be utilized in my design process. I've put all of those rules inside a claude.md file, along with the source code for this app, on AI Labs Pro. For those who don't know, it's our recently launched community where we put together all the resources from this video and all our previous videos in one place, be it prompts, our reusable commands, the skills we create for our projects, and much more. If you found value in what we do and want to support the channel, this is the best way to do it. Links in the description.

I prefer to plan my apps using Claude Code's plan mode, because it iterates on every aspect of the generic idea I give it and writes a detailed document, but you can plan with the IDE of your choice. I gave my app idea to Claude Code in plan mode and asked it to create a UI guide for it, making sure the UI didn't look like the generic AI designs it usually produces. Even though it was time-consuming, I went through the entire planning process.
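For reference, here is a minimal sketch of that setup on a Mac. The Stitch API service name and the MCP registration command below are assumptions based on the usual gcloud workflow, so verify both against the official Stitch documentation before running them.

  # Install the Google Cloud CLI (the docs use curl; Homebrew works on a Mac)
  brew install --cask google-cloud-sdk

  # Log in twice: once as a user, once for application-default credentials
  gcloud auth login
  gcloud auth application-default login

  # Point the CLI at the Google Cloud project whose resources Stitch will use
  gcloud config set project YOUR_PROJECT_ID          # placeholder project ID

  # Enable the Stitch API in that project (assumed service name)
  gcloud services enable stitch.googleapis.com

  # Register the Stitch MCP server with Claude Code; the launch command is a
  # placeholder to be taken from the Stitch docs
  claude mcp add stitch -- <stitch-mcp-launch-command>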
It is important to read through the plan thoroughly. I had to refine it to my liking by making lots of changes, and I kept iterating until I was completely satisfied; in the end, I had a plan exactly how I wanted it. Once the design plan was finalized with Claude's plan mode, it was time to get it implemented. Honestly, I prefer Gemini's design capabilities over Claude's models, so I asked Claude to take the plan it had just generated and create a matching design by starting a new project through the Stitch MCP. It used the MCP tool to create a new project and then started generating the sections. It identified the project by its project ID and sent a detailed prompt to Stitch, which received the prompt and started generating the screens, using Gemini 3 Flash for the first section.

One thing I didn't like was that it sent prompts for all the sections of the landing page separately instead of creating one long design, the way web pages are usually designed. This might pose trouble when we have to implement them together in one app. So I asked Claude to create one long page design instead of individual components. Claude then produced a really detailed prompt, which Stitch used to generate the entire landing page while keeping all the ideas from the section-by-section approach intact. In the design, I really liked how it used references like comments and terminal commands to make it feel closer to developers. I used Stitch's preview feature to check how the UI looked across different platforms, and it had implemented smooth hover effects. The design was responsive and worked well on both mobile and desktop.

Once I was satisfied with the design Stitch generated, I wanted to move it into the Next.js project I was building the entire app in with Claude Code. I asked Claude to use the Stitch MCP to get the code for the complete landing page design and implement it in the newly initialized Next.js project. It used the get-screen tool to fetch the screens. This tool returned downloadable links for the code, so Claude ran curl commands on those links to extract it (sketched at the end of this passage). Once it had the HTML, it was easy to implement in the existing Next.js project. It started integrating the design into the app, but it only made changes to the main page.tsx file, dumping everything into one file instead of following a proper component structure, which is the recommended standard for React-based applications.

For now, I ran the dev server to see how the implemented UI looked. The implemented design was almost identical to what Stitch generated, aside from the hero section's text placement. It had implemented all the hover effects and added animations for the bento grids and parallax scrolling in the background. To fix the dump of code into a single file, I asked Claude to use a proper component structure for better modularity. It refactored the code and reorganized it into a clean, well-structured set of UI components and pages, making it easier to navigate.

With the help of Claude Code and Stitch, I created a complete application with a theme that combined a developer vibe with a terminal aesthetic. However, it still had several UI issues. For example, the code panel should be editable, because that's where users will type code during their technical interview sessions. Also, the question should appear at the top, as its current placement creates a poor UX for anyone using the app.
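As a rough illustration of that export step, the flow looks something like the following; the download URL and output file are placeholders, since the real links come back from the get-screen call.

  # Fetch the generated screen's code from the download link returned by the
  # get-screen tool (placeholder URL), then let Claude Code translate the
  # HTML into the Next.js app's page.tsx
  curl -L -o landing.html "https://<stitch-download-link-for-screen>"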
To test the application, I used Vercel's agent browser. This is a CLI tool designed for AI agents, built on Rust and Node.js, which makes it significantly faster than traditional browser automation tools. Installation is simple: just run the install command and it installs globally on your system. It is limited to Chromium-based browsers for now and isn't available for Firefox or Safari.

Agent browser offers better features than Claude's Chrome extension, Playwright, Puppeteer, or other dev tools. The Chrome extension relies on running a full browser, taking screenshots, visually mapping pixels, and then navigating around the UI. In contrast, agent browser is a CLI tool that runs as bash commands and works with snapshots instead of screenshots. Essentially, a snapshot is an accessibility tree of the entire page, tagged with selectors that identify individual components. The agent uses these selectors to navigate the page efficiently. It doesn't share a session with your existing browser; it runs in a separate one, so it can't take actions that require your active sessions, unlike Claude's extension, which runs inside your own browser and can act on your behalf. If you want to see the available commands, check out the GitHub repo, which contains a detailed list of all the core commands: how to navigate the UI, use selectors, simulate mouse control, manage cookies and storage, and even monitor network activity.

For my app, I added a claude.md file and instructed it to use the agent browser tools for all kinds of testing. I also told it to use the help command if it didn't know how to use any agent browser command. You can get these rules, along with the complete workflow guidelines, on AI Labs Pro as well. Even though this is a headless browser automation tool, we can also run it in headed mode by enabling the headed option, which shows the browser window so we can watch it work. I asked Claude to use the browser in headed mode to test the application UI. It first used the help command to see all the available commands, and then it opened the browser. It took an approach similar to what we suggested with Claude browser use: taking a full-page snapshot instead of section-by-section screenshots like Claude's Chrome extension does, which significantly sped up the testing process while maintaining the same level of accuracy. (A rough sketch of this flow follows at the end of this passage.)

The agent performed many actions, navigating through the entire app and testing multiple features by moving across different UIs and checking the visual layout. It completed the entire testing process in just 4 minutes, whereas other browser automation tools would have taken much longer, because it relies on accessibility trees rather than screenshots. It also identified that the code editor needed to be editable and implemented that fix right away. It then tested the fix in the browser by taking a snapshot, finding the editor's selector, and using the agent browser's clicking and typing functions to edit part of the code and confirm the change worked.

That brings us to the end of this video. If you'd like to support the channel and help us keep making videos like this, you can do so by joining AI Labs Pro. As always, thank you for watching, and I'll see you in the next one.
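Here is a rough sketch of that testing flow. The command names and flags are assumptions inferred from the transcript rather than the tool's confirmed syntax; the GitHub repo and the built-in help output are the authoritative reference.

  # Install the agent browser globally (assumed npm package name)
  npm install -g agent-browser

  # List the available commands, as Claude was instructed to do when unsure
  agent-browser --help

  # Open the dev server in headed mode so the browser window is visible
  agent-browser open http://localhost:3000 --headed   # assumed flag

  # Take a full-page snapshot: an accessibility tree tagged with selectors
  agent-browser snapshot

  # Click into the code editor and type, verifying the editable-editor fix
  # (@e12 is a placeholder selector taken from the snapshot output)
  agent-browser click @e12
  agent-browser type @e12 "console.log('hello')"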

Summary

The video demonstrates an advanced AI-powered design workflow using Google's Stitch MCP and Vercel's agent browser to build and test a technical interview app, emphasizing the importance of combining tools for specific purposes rather than relying on a single AI solution.

Key Points

  • The creator uses Google's Stitch MCP to generate UI designs from text prompts, leveraging Gemini 3 Flash for design generation.
  • Stitch creates responsive, developer-friendly UIs with terminal-style elements and hover effects, which are tested across platforms.
  • The design is exported via downloadable code links and integrated into a Next.js project using Claude Code.
  • Claude refactors the imported code into a modular component structure for better maintainability.
  • Vercel's agent browser is used for fast, accurate UI testing using accessibility trees instead of screenshots.
  • The agent browser enables efficient testing of interactive features like editable code panels and UI navigation.
  • The workflow includes planning with Claude Code, design generation with Stitch, and testing with Vercel's CLI tool.
  • The process highlights the importance of planning, refinement, and tool integration for high-quality AI-assisted design.
  • The creator shares a complete workflow and resources in AI Labs Pro for viewers to replicate the process.

Key Takeaways

  • Use the right tool for each task—combine AI design tools like Stitch with testing tools like Vercel's agent browser for better results.
  • Plan your app thoroughly using AI planning modes before implementation to ensure a clear design direction.
  • Leverage AI agents to automate design generation, code integration, and UI testing for faster development cycles.
  • Refactor AI-generated code into modular components to maintain clean, maintainable codebases.
  • Test UIs with browser automation that relies on accessibility trees instead of screenshots for faster, more accurate results.

Primary Category

AI Tools & Frameworks

Secondary Categories

AI Engineering, Programming & Development, AI Agents

Topics

Google Stitch, MCP integration, AI design tools, Claude Code, Google AI Studio, Gemini models, Next.js, Vercel Agent Browser, terminal aesthetic design, developer-focused UI

Entities

people: (none)
organizations: Google, Vercel, AI Labs Pro
products: Google Stitch, Claude Code, Google AI Studio, Gemini 3 Flash, Vercel Agent Browser, Next.js
technologies: MCP, Google Cloud SDK, CLI tools, accessibility trees, React, bento grids, parallax scrolling
domain_specific: (none)

Sentiment

0.75 (Positive)

Content Type

tutorial

Difficulty

intermediate

Tone

educational, technical, promotional, critical