How to 10X Your Claude Code Workflow
Development in this space never truly stops. Every month we get new models, and with Anthropic, every new release brings qualities and features that make us think about AI coding in a completely new way. Today we'll be looking at a new agentic framework for AI coding that takes advantage of a feature Claude Code just shipped. This framework relies heavily on a sub-agent structure built from Claude Code agents, knowledge persistence to record learnings for next time, and more. Many people in the AI Labs Discord community DM'd me about this framework, and I just had to test it out. So let's get into the video and find out if it's truly worth it.

At its core, this framework follows a simple philosophy: each unit of engineering work should make the next unit much easier. The workflow consists of multiple steps and several tools you can use to apply it, organized into distinct stages. First, there's the plan stage, where you provide your PRD and details about what you want to build. In my case, I told it I wanted to make a GitHub repo research manager, gave it a PRD outlining the basic features I wanted, and listed the tech stack.

This framework works through custom slash commands. You can see that we have the plan command from compound engineering. At this stage, you give it everything about what you want to build. It then plans it out and launches several agents: a research agent, a best-practices agent, and a framework agent as well. What I really like is that it uses everything from sub-agents to slash commands and integrates it all seamlessly. The framework researcher, which researched the frameworks we were using, such as Next.js and FastAPI for the backend, ran for six minutes and used around 50k tokens while conducting its research.

What it does next is create a new work tree. Inside that, it creates a new GitHub issue in an MD file, compiling all the research there and detailing which files need to be created. Essentially, it pre-engineers the context. Based on this documentation and the outlined features, it creates phases for us. Each phase contains everything that needs to be set up for both the backend and the frontend. Right now, it has created a total of six phases, each with all the necessary components for execution. This completes the plan part of the workflow.

Before continuing further, let me explain how to install it. This involves just two steps. You need to run Claude and add the marketplace plugin. For those wondering what a plugin is, it's the feature recently released by Claude Code: you can now customize Claude Code with different plugins. Plugins essentially serve as collections of slash commands, sub-agents, MCP servers, and hooks. You can create a plugin and share it with your team or anyone else, which is really useful because it provides a standardized way of sharing configurations. First, run the command to add the marketplace. Then, to install the actual plugin, run the install command. As you can see, I already have it installed.

The next phase in this workflow is using delegate. Delegate takes the tasks that have been created and executes them one by one using the work command. When I ran the work command, it first created a separate git work tree for the repo manager and then started working within that. When you first run the plan command, a detailed GitHub issue is generated that breaks everything down and makes all the architecture decisions.
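To make the two installation steps concrete, here is a minimal sketch of what adding a plugin marketplace and installing a plugin from it looks like in Claude Code, as of the plugin feature's release. The marketplace repo and plugin name below are placeholders rather than the exact ones from the video, so substitute the names from the compound engineering repo's README.

```
# Inside a Claude Code session (names are illustrative placeholders)
/plugin marketplace add <github-org>/<marketplace-repo>
/plugin install <plugin-name>@<marketplace-name>
```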
Right after running the plan command, it doesn't actually write the details into markdown files. Instead, it writes them inside the agent's to-do tool and sets them up there. You can see everything has been planned: phase 1, phase 2, all the way up to phase 6, each written as a task inside the to-do list. To show you all this, I had to exit and then resume my session so that the compaction feature wouldn't erase the data, which allowed me to show what it had done.

It went ahead and executed all the phases one by one, setting up the database, backend, and frontend within these tasks. Once a phase was completed, it paused and displayed what it had achieved for that phase, which allowed me to review each phase before moving forward. An interesting thing I noticed is that even after compacting, the to-dos persisted across sessions. For instance, in the snippet I showed earlier, phase one was only half completed before I ended the session; the rest was completed afterward. This persistence happens because the to-do items remain stored even when the session is compacted. In a way, this approach makes sense: instead of keeping everything in separate markdown files, it lists the tasks directly in the to-do tool. While the GitHub issue file serves as the context base, it also continues editing that issue as each phase completes.

What I usually do before implementing a new phase is run the context command from Claude Code. This helps me visualize how much context remains before reaching the buffer limit, ensuring I don't need to compact mid-workflow. Running one phase across two separately compacted sessions isn't ideal for implementation, so this check helps maintain continuity.

This is what it actually came up with. If I go in, you can see that our repositories have been added. There are still some errors in here, and the framework handles that as well. I did realize one thing: because of the lack of proper requirements, some features weren't implemented correctly. For example, this export button exports all the repos combined into one MD file, whereas what I actually wanted was for it to extract data from each repository individually. That was a requirements mistake on my part.

I was also thinking of adding the Gitingest library here directly, since it's an open-source tool that can be added anywhere. It converts a git repository into an LLM-readable format, which is how I actually dove deeper into this project: I converted the compound engineering repo into an LLM-readable format and gave it to Claude so I could ask detailed questions about how everything worked and demonstrate it properly. It's a really neat approach (a minimal usage sketch appears at the end of this section).

The next step is the assess phase, which is basically about reviewing whether the code that was written is correct. When you run the review, it triggers another custom command with custom context. As it runs, it verifies everything that was created and then fires up multiple sub-agents. Kieran, the author of this repo, has created several of his own custom agents for this process, including agents for Python, TypeScript, and even Rails. These agents run in parallel, each reviewing specific aspects such as language-specific correctness, security, performance, architecture, and data integrity. Once the review is complete, the agents list all the critical issues they find, organized by priority. After this, you run the triage command, which takes all the identified issues and asks whether you want to implement them or not.
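Since the video leans on Gitingest to turn the compound engineering repo into pasteable context, here is a minimal sketch of how the open-source library is typically used from Python. The repository URL and output filename are just examples, not the ones from the video.

```python
# pip install gitingest
from gitingest import ingest

# ingest() returns a short summary, a directory tree, and the concatenated file contents
summary, tree, content = ingest("https://github.com/octocat/Hello-World")  # example repo URL

# Write the digest to a file so it can be handed to Claude as context
with open("repo-digest.txt", "w", encoding="utf-8") as f:
    f.write(summary + "\n\n" + tree + "\n\n" + content)
```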
If you choose to fix them, it adds them to a to-dos folder, and each issue is listed individually inside markdown files. This part really surprised me: the issues themselves are properly documented in MD files, but the implementation tasks are not. Each issue file contains a detailed breakdown of what was wrong and how to fix it. This is where the framework comes full circle. After you're done with the review, it moves into the codify step, recording all the learnings inside those issue files. Then it loops back to plan and delegate, allowing you to iterate continuously. If you want to add new features, you can do that too, and the cycle starts all over again.

After review, there's a resolve command which enables parallel execution. When you run it, it helps resolve the to-dos simultaneously to save time. It analyzes all the to-dos and automatically builds a dependency graph to determine what can be done in parallel and what can't (a small sketch of this batching idea follows at the end of this section). One thing I really like is that all the work happens inside work trees, meaning everything runs in isolated environments. Each new feature you add creates a new work tree, and every feature also raises a new GitHub issue. This makes the framework especially useful for adding new features to existing codebases, provided you have a well-consolidated context of the codebase.

Another thing I discovered while working with this framework is that naming conventions are extremely important when you're dealing with multiple components and pages; clear and consistent naming really makes a difference. We'll be dropping another video soon on best context engineering conventions with Claude Code, so stay tuned for that. Once everything's done, it commits your changes to the GitHub repo online, and that's how your project gets built.

If you're wondering how this differs from the BMAD method, the difference lies mainly in approach. The BMAD method is much more thorough and heavily focused on planning. For example, the export command error I mentioned earlier wouldn't have happened if I had followed BMAD. However, BMAD can feel too detailed and time-consuming for smaller projects, while this approach is far more efficient, partly because of the built-in to-do system inside Claude Code. Other than that, I don't think it offers anything drastically new or groundbreaking. Its biggest strength is that it's a plugin: easy to set up and convenient to use.

That brings us to the end of this video. If you'd like to support the channel and help us keep making videos like this, you can do so by using the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.
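To illustrate the dependency-graph idea behind the resolve command, here is a minimal Python sketch that groups to-dos into batches that can safely run in parallel. This is not the plugin's actual implementation; the task names and dependencies are invented for illustration.

```python
# A minimal sketch of dependency-aware batching, not the plugin's actual code.
from graphlib import TopologicalSorter

# Each key maps a to-do to the set of to-dos it depends on (invented examples)
deps = {
    "add-export-endpoint": set(),
    "fix-per-repo-export": {"add-export-endpoint"},
    "update-export-button": {"fix-per-repo-export"},
    "tighten-input-validation": set(),
}

ts = TopologicalSorter(deps)
ts.prepare()
while ts.is_active():
    batch = list(ts.get_ready())      # every task here has no unmet dependencies
    print("run in parallel:", batch)  # e.g. hand each batch to parallel agents
    ts.done(*batch)
```

Running this prints three batches: the two independent tasks first, then each dependent task once its prerequisite is done, which is exactly the "what can run together, what must wait" decision the resolve step automates.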
Summary
This video introduces an agentic framework for AI coding using Claude Code's new plugin system, which enables structured, iterative development through plan, delegate, assess, and codify phases with persistent task management and agent-driven code generation.
Key Points
- The framework leverages Claude Code's marketplace plugins to create a customizable, agentic coding workflow.
- It follows a structured process: plan (with agents and work trees), delegate (executing phases), assess (reviewing code with specialized agents), and codify (recording learnings).
- The plan stage generates a detailed GitHub issue per feature and tracks phases in a to-do list that persists across sessions.
- The system uses sub-agents for language-specific review (Python, TypeScript, Rails) and automatically builds dependency graphs for parallel execution.
- Work is isolated in separate work trees per feature, enabling safe iteration and integration into existing codebases.
- The approach emphasizes context engineering and naming conventions for maintainability.
- It contrasts with the more detailed BMAD method by offering a faster, more efficient workflow suitable for smaller projects.
- The framework records learnings in markdown files, enabling continuous iteration and improvement.
Key Takeaways
- Use Claude Code's marketplace plugins to build customizable AI coding workflows with structured phases.
- Leverage agent-driven planning and review to automate code generation and quality assurance.
- Implement persistent to-do lists and work trees to maintain context and enable session continuity.
- Apply the plan-delegate-assess-codify loop to iteratively improve code and add features efficiently.
- Prioritize clear naming conventions and context management for maintainable AI-generated code.