The Greatest Problem With AI Coding Is Solved

AILABS-393 · 3FZIdRZsUMM · Published January 19, 2026
Scored
Duration: 11:30
Views: 15,759
Likes: 495

Scores

Composite: 0.72
Freshness: 0.15
Quality: 0.91
Relevance: 1.00
2,339 words · Language: en · Auto-generated transcript

We are in a new era of software development. Developers are shipping products at a speed we have never seen before. However, a problem has emerged: traditional workflows do not hold up when agents are involved. This raises an important question: what does the developer role look like now? A recent article by the CEO of Linear caught my attention. Linear is a project management tool that helps teams organize and track their work, built specifically for modern software development. These insights come from someone who has lived through the transition from traditional workflows to the AI-driven systems of today. This article made me rethink more than just the tools we use; it made me rethink how we build products entirely. We've got a lot to talk about today, because this information fundamentally changes how we build with AI. The middle of software work is disappearing, and the center of software is moving. To understand what the middle is, let us look at how work was divided before AI development. It started with the beginning phase, which included all of the requirements gathering and planning. In this phase, we created plans for what we were going to build. Then came the middle, where we converted the plan into the actual product; this was the part that involved writing the code. It took the most time of all: weeks, months, or even a year to deliver a quality, fully working product. It was also the part where details got mixed up the most, because intent had to be translated and ideas conveyed from one person to another. After the code was written, the end part included various forms of testing and reviewing against the original requirements. The middle was the part that contained the most friction. But the CEO says that is no longer going to be the case, because the middle work, the implementation and coding, is being replaced by AI. Now we don't have to touch code ourselves at all.
This is because coding agents have become so powerful that they are able to produce code from context and task planning alone. It is now more about using the agents the right way and supervising their work than writing code. If you've been watching our videos regularly, we have taught and demonstrated many different ways you can use coding workflows to produce production-level apps. You can do this by just supervising the agents, without having to code a single line yourself. IDEs have become more of a code viewer than a writing tool. This change is really apparent to me, because as a developer, my go-to tool for writing code has now become a tool for reviewing the code the agent produces. Now I just go to VS Code to review or add comments so the AI agent can implement the commented features. I very rarely have to change anything or write code myself now, because agents are highly capable. But this only works if the agents are able to understand the intent. Therefore, our work as developers has essentially shifted from writing code to supervising it. You've probably noticed we build a lot in these videos. All the prompts, the templates, the stuff you'd normally have to pause and copy from the screen: we've put it all in one place. We've recently launched AI Labs Pro, where you get access to everything from this video and every video before it. If you found value in what we do and want to support the channel, this is the best way to do it. Links in the description. Since AI has taken over most of the coding work, this leads to a question: what is left for us? The answer is focusing on the new craft of refining the intentions of what to build. The way you can do that is by treating planning as your primary job. You need to clearly understand the problem you are trying to solve. You need to know what your customer actually wants and how people will use your app. This has become even more important now.
You are no longer relying on humans who can interpret intentions from poor planning. Instead, you are relying on AI agents that blindly implement whatever you instruct them to do. Whether you are building a mobile app or a web app, you need to know exactly what you want to build. Without that clarity, you cannot do meaningful planning with the agents' planning modes. Planning is vital. As we have emphasized in our previous videos, only good plans lead to good implementation. It does not matter which agent you are using; planning controls the outcome of the agent. Take as long as you need. Keep refining the plan until it fully satisfies your needs and meets your expectations. This will ensure that your app turns out the way you want. Until three months ago, we never relied on bypass-permission mode for building, because agents used to hallucinate despite a good plan. Now the agents are so reliable that after refining the plan, I just turn bypass-permission mode on and let the agent implement the specs in a single run. We also saw that even the creator of Claude Code starts his implementations in plan mode. If the plan is good enough, you can let the agents build the app in one shot without worrying about messy implementations. I also spend a significant amount of time making sure that what I am building is fully documented. I do not cram it all into a single document, so the agent can navigate through the plans easily. I use different documents for each category, such as risk assessments, mitigations, and tech specs. I list constraints and trade-offs in a separate document. This is how the agent understands what is acceptable in terms of performance, cost, and time. This approach leads to much more controlled development. After all the requirements have been verified, the next step is to actually manage the agent and get what we want. But before we talk about that, here's a quick word from our sponsor, Dart AI.
Managing complex software projects often involves more administrative overhead than actual coding. Dart is not just a standard project management tool; it is an AI-native workspace designed to automate busywork for developers. With the context-aware AI chat, you can create tasks and edit documents just by talking naturally. Beyond AI chat, you can even onboard agents like Cursor to execute work; Dart gives them the context to actually write your code. The real power lies in its AI guidelines feature. You can configure global rules, like instructing the AI to always format technical specs with specific goals and requirements headers, and Dart enforces this structure across every chat, task, and document it generates. For us, the AI skills feature is a game-changer. You can define custom commands, like a generate-project skill that automatically creates a populated task list, assigns priorities, estimates sizing, and drafts a project brief in seconds. Start automating your project management today by checking out Dart AI at the link in the pinned comment. You are no longer just a coder. Your work is more centered around supervising agents than actually writing code. Writing code has become less about constructing a solution and more about setting up the conditions for a good solution to emerge. So how do you create the right environment for agents to produce quality outcomes? The answer is context engineering. The next big skill you need to learn is not a specific web development stack like MERN or MEAN; it is context management. We've consistently seen that without proper context management, the agent does implement the features we prompt it to, but it doesn't follow the constraints or rules the implementation was supposed to match. We need to ensure the context is managed properly. When the agent is given the right information with minimal noise, it understands the task more clearly, produces better implementations, and delivers exactly what you want.
Managing the context involves using a set of components like reusable commands, skills, markdown files, MCPs, and sub-agents. There is no single right way to do it. You should use multiple methods that work well for what you are trying to build, and create a workflow that suits your project. We have dedicated an entire video to demonstrating how you can build workflows with context management. This ensures the model you are using gets the right context and can produce high-quality applications. If you want to follow along, all the resources for that video are available in AI Labs Pro. An agent's work is only as good as the context-driven environment it operates in. The more directly it is connected to customer feedback and supported by a structured workflow, the better it can perform. We need to create such an environment, because it does not happen automatically. For this reason, Claude has connectivity with Slack so that teams can directly report errors. This creates valuable feedback loops, which even the creator of Claude Code himself uses. Large teams are already producing high-quality AI-generated code. The creator of Claude Code claimed that in the past month, 100% of his contributions were effectively written by Claude Code itself. This does not happen just by giving it a prompt; it requires a set of workflows and orchestrated patterns to make it possible. Even the CEO of Microsoft admits that AI now generates 20% to 30% of Microsoft's code across all languages, with especially notable progress in Python and C++. Structure in tools works the same way for both humans and agents: it reduces uncertainty by clearly defining what is expected and what capabilities exist. If you are using AI agents without structure, you are only using a fraction of their potential. The structure can take many forms. This includes a claude.md file for overall project guidance and a changelog to track changes.
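As a rough illustration, the claude.md guidance file mentioned above might look something like this. The project stack and rules here are invented for the example, not taken from the video:

```markdown
# CLAUDE.md — project guidance (illustrative sketch)

## Stack
- Next.js frontend, FastAPI backend, Postgres database

## Rules
- Never modify files under tests/ during implementation runs
- Record every schema change in CHANGELOG.md
- Keep constraints and trade-offs in docs/constraints.md; consult it
  before making performance- or cost-sensitive decisions
```

The point is not the specific contents but that the agent reads this file on every run, so project-wide expectations do not have to be repeated in each prompt.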
You can also use reusable /commands or specialized skill.md files with scripts and references. Additionally, you can use plugins and MCP tools to extend the agent's capabilities. But knowing these tools is not enough; the right combination matters. Every project requires a different setup, so you have to build one based on your project's needs. With the right balance, you will get results just the way you want. Our job is not done after planning and delegating tasks to agents. As I mentioned, I let Claude Code work in dangerously-skip-permissions mode. It does save a lot of time, but it requires our time and attention elsewhere: the pressure shifts toward the end of the cycle, where reviewing the code becomes more important. Code that is not reviewed can lead to degraded performance and high costs. You can use structured workflows to make reviewing easier. This will lead to fewer bugs and save you from issues later on. Testing is no longer just going to your agent and saying "test my app for all the issues." It involves several approaches to improve the process. One method is test-driven development. We ask the agent to write test cases for the feature we want to implement, without writing any code initially. Once the tests are written, I clear the context and start a new window. This ensures the agent loses context on how it wrote the tests. I ask Claude to run the tests, and they fail because no code has been written yet. Now that I know the tests are working correctly, I ask Claude to implement the route, ensuring that it does not modify the tests. This way, the agent has a clear goal to iterate toward. In TDD, tests are written before the code, but testing should also happen after the code is written. For that purpose, there are many forms of testing. I use blackbox testing and create user stories. These act as detailed guides on how users will actually interact with the system and how those interactions might trigger errors.
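Before moving on to blackbox testing, the test-first loop described above can be sketched in plain pytest-style code. Here `create_item` is a hypothetical stand-in for whatever feature ("the route") the agent is asked to build; the names and behavior are assumptions for illustration only:

```python
# Step 1: the agent writes this test first, in a fresh context, before any
# implementation exists. Running it then fails, which proves the test
# itself is wired up correctly.
def test_create_item_returns_created_status():
    result = create_item("demo")
    assert result == {"name": "demo", "status": "created"}


# Step 2: implemented only after the failing test run, with the agent
# instructed not to modify the test above. The test is now the fixed
# goal the agent iterates toward.
def create_item(name: str) -> dict:
    if not name:
        raise ValueError("name is required")
    return {"name": name, "status": "created"}
```

The key discipline from the video is the context reset between the two steps: because the implementing agent never saw how the tests were written, it cannot game them and must satisfy them as an external specification.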
Blackbox testing evaluates the functionality of an application against its requirements, without looking at the code itself. I then use the Claude Chrome extension to perform the testing and ask it to iterate over each user story, section by section. Blackbox testing mainly identifies functionality issues. For performance testing, we also need whitebox testing. This is where we actually look at the code, not just the output: we trace how the code is implemented and reason about its architecture. For whitebox testing, I use an XML document containing multiple sections and subsections of tests. This document acts as a guide for Claude on how to navigate through the written code and how to find architectural issues. To simplify my testing, I use a custom command that executes the tests in the document, which I placed in the testing folder. This command lists the instructions for initializing the tests, how to log the results into a file in a structured format, and, at the end, how to generate a final report. This slash command made whitebox testing easy for me, because it contains the structured prompt for testing. Since the middle is disappearing and the focus is shifting toward the beginning and the end, we need to rethink our priorities. What we need to prioritize now is forming the right intent through planning and requirement assessment. We must also ensure that the outcome meets expectations through thorough testing and review processes. The developers who master these principles will be the ones leading the future. That brings us to the end of this video. If you'd like to support the channel and help us keep making videos like this, you can do so by using the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.

Summary

The video discusses how AI coding agents are transforming software development by eliminating the traditional 'middle' of coding, shifting developers' roles from writing code to planning, supervising, and testing AI-generated implementations.

Key Points

  • The traditional software development workflow is changing, with AI agents now handling the implementation (middle) phase.
  • Developers' roles are shifting from coding to planning, supervising, and reviewing AI-generated code.
  • Effective planning is crucial, as AI agents implement whatever instructions they receive, making intent clarity essential.
  • Context engineering is now a core skill, involving reusable commands, markdown files, MCPs, and structured workflows to guide agents.
  • Testing must be more rigorous, including test-driven development, blackbox testing, and whitebox testing to ensure quality.
  • Tools like Dart AI help automate project management and improve agent performance through AI-guided workflows.
  • The CEO of Microsoft and other developers report significant AI-generated code contributions, showing the shift is already happening.
  • Developers must master intent refinement, context management, and structured testing to succeed in the AI-driven development era.

Key Takeaways

  • Focus on planning and intent clarity to guide AI agents effectively.
  • Use structured workflows and context management to improve AI agent output quality.
  • Implement both test-driven and post-implementation testing to ensure code reliability.
  • Leverage AI-native tools like Dart AI to automate project management and agent supervision.
  • Master context engineering with reusable commands, markdown files, and MCPs to control AI outputs.

Primary Category

AI Agents

Secondary Categories

AI Tools & Frameworks, Programming & Development, AI Engineering

Topics

AI coding, coding agents, context engineering, planning mode, test-driven development, review processes, vibe coding, AI workflow, code implementation, developer role

Entities

people
CEO of Linear, creator of Claude Code
organizations
Linear, Microsoft
products
Claude Code, Google AI Studio, Dart AI, VS Code, Cursor, Claude
technologies
AI coding agents, LLMs, MCPs, sub-agents, Markdown files, test-driven development, blackbox testing, whitebox testing
domain_specific

Sentiment

0.85 (Positive)

Content Type

tutorial

Difficulty

intermediate

Tone

educational, technical, inspiring, promotional