Inngest's New Tool Is Insane For AI Coding
With the way AI coding is going, so many things are becoming automated, so what's wrong with another thing going out of our hands? LLMs got tools, and just like that, so much of what humans did was automated. With the Puppeteer MCP, we saw automated UI testing. Now, Inngest has given us a monitoring layer that lets your coding agents become live debuggers of the code they generate.

They're doing this by releasing an MCP for the Inngest dev server, which is basically a local version of their cloud platform. The platform lets you test all the functions you've built inside your agent and provides a visual interface for everything, along with the different events that run. With this, you can directly ask AI agents like Claude Code or Cursor to do all the automated testing. If Vercel had something like this, deployment and debugging would only require a single prompt.

For those who don't know, Inngest is an open-source workflow orchestration platform that lets you build reliable AI workflows and takes care of many of the problems that come with them. I've been using it to build agentic workflows at our company, and the developer experience is really good. With the MCP server, it's gotten even better.

These workflows are built with async functions, and testing and debugging them is a problem. Most of them are triggered by external events and run asynchronously with multiple steps. For those of you who don't know what asynchronous means, these are functions that can pause, wait for something to finish, and then continue without blocking everything else. These functions are part of larger workflows, which makes debugging even harder. You usually end up manually triggering events, constantly switching between your code editor and your browser, digging through logs to understand what actually happened with a single function or why it failed, or recreating complex events just to test a function. But now, with the MCP integration, your AI agent can handle all of this automatically.

They also published a "context engineering in practice" article where they explained how they actually built an AI research agent, and I'll be using this agent to show how the MCP works. The agent applies context engineering inside itself, not just in how it was built, in both its context retrieval phase and its context enrichment phase. They also explain the difference between context pushing and context pulling really well. It's a really interesting article, and I might make a video on it, so if you're interested in that, do comment below.

The agent is completely open source. I copied the link, cloned it, installed the dependencies, and initialized Claude Code. I had it analyze the codebase and create the CLAUDE.md. The article also explains why you should use different models for their different strengths, and they've implemented the research agent with separate LLMs for different roles. They're using Vercel's AI Gateway, which gives you access to 100-plus models, but I wanted to use a single model. Using the CLAUDE.md, Claude Code updated the codebase and switched it to use OpenAI's API, and after editing, it told me which files it had changed. After that, I copied the configuration for Claude Code, created an MCP.json file, pasted it in, started the Next.js app, and then started the Inngest dev server, which you've already seen.
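Quick aside before the demo: since these workflows are built from the step-based async functions I described earlier, here is a minimal sketch of what one looks like with the Inngest TypeScript SDK. The client id, event name, function id, and step contents are hypothetical placeholders, not taken from the demo agent.

```ts
import { Inngest } from "inngest";

// The client identifies this app to the Inngest dev server (or cloud).
const inngest = new Inngest({ id: "research-demo" }); // hypothetical app id

// A minimal event-triggered function with two steps.
export const researchFn = inngest.createFunction(
  { id: "run-research" },                 // hypothetical function id
  { event: "research/question.asked" },   // hypothetical event name
  async ({ event, step }) => {
    // Each step.run() is checkpointed and retried independently.
    const sources = await step.run("gather-sources", async () => {
      // e.g. call a search API with event.data.question
      return [`results for: ${event.data.question}`];
    });

    const answer = await step.run("summarize", async () => {
      // e.g. call an LLM to summarize the gathered sources
      return `summary of ${sources.length} source(s)`;
    });

    return { answer };
  }
);
```

Because each step is checkpointed and can pause and resume, a single run is spread across several executions, which is exactly why you normally end up in the dev server UI or the logs trying to work out where a run stalled.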
After that, I restarted Claude Code and checked that the MCP was connected. Inside the MCP, you have event management, which can trigger functions with test events and return run IDs, along with other tools that let it list and invoke functions. You also have monitoring tools that let it fetch run statuses, plus documentation access. So if something does go wrong with the Inngest functions, I no longer have to dig around manually to find out what's wrong with my agent; these tools can tell Claude what went wrong, and it can fix it for me.

It used the send-event tool to trigger the main research function with the question "what is context engineering?". After that, it polled the run status, which basically means it asked over and over again whether the run was complete. Then it tested again and saw that all the calls were using the correct model name and the workflow was still executing nicely.

In their own words, this represents a fundamental shift in how they're building and debugging serverless functions. Instead of functions being black boxes that the AI model just reads from the outside, the AI can now watch the actual execution and get real-time insight. Hopefully we'll see this happening with other tools as well, where we give AI more autonomy, and I'm pretty excited for it.

That brings us to the end of this video. If you'd like to support the channel and help us keep making videos like this, you can do so by using the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.
Summary
Inngest's new MCP for its dev server enables AI agents to automatically test, debug, and monitor asynchronous workflows, offering real-time insight and reducing manual debugging effort.
Key Points
- Inngest has released an MCP for its local dev server, allowing AI agents to monitor and debug the code they generate.
- The tool enables automated testing and debugging of asynchronous functions triggered by external events.
- It provides a visual interface to manage events, view run statuses, and access documentation for workflows.
- AI agents can now self-diagnose issues and fix them automatically using tools like send-event and status polling (see the sketch after this list).
- The demo research agent applies context engineering and uses separate LLMs for different roles in its workflow.
- This shift allows AI to work inside the execution environment rather than just observing it from outside.
- The approach reduces the need for manual log digging and event recreation during debugging.
- The agent used in the demo is open source and built with async functions, which traditionally makes debugging difficult.
- The workflow includes context retrieval and enrichment phases, with a focus on model-specific strengths.
- Users can use Vercel's AI Gateway to access over 100 models, though the demo was switched to OpenAI's API.
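As a rough sketch of what the send-event and status-polling tools in the list above boil down to, here is the kind of call sequence the agent makes on your behalf. It assumes the Inngest TypeScript SDK's inngest.send() and the local dev server's REST endpoint at http://localhost:8288/v1/events/<event-id>/runs as documented for local testing; the app id and event name are hypothetical, so adjust them to your own setup.

```ts
import { Inngest } from "inngest";

// Assumes the Inngest dev server is running locally and the SDK is in dev mode.
const inngest = new Inngest({ id: "research-demo" }); // hypothetical app id

async function triggerAndPoll() {
  // 1. Trigger the function with a test event (what the send-event tool does).
  const { ids } = await inngest.send({
    name: "research/question.asked", // hypothetical event name
    data: { question: "What is context engineering?" },
  });
  const eventId = ids[0];

  // 2. Poll the dev server for runs triggered by that event
  //    (what the run-status polling does). Assumes the default port 8288.
  while (true) {
    const res = await fetch(`http://localhost:8288/v1/events/${eventId}/runs`);
    const { data } = await res.json();
    const run = data?.[0];

    if (run && run.status !== "Running") {
      console.log("run finished:", run.status, run.output);
      return run;
    }
    await new Promise((r) => setTimeout(r, 1000)); // wait a second and retry
  }
}

triggerAndPoll().catch(console.error);
```

This polling loop is exactly the behavior described in the video: the agent keeps asking whether the run has completed, then inspects the output to decide whether anything needs fixing.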
Key Takeaways
- Use Inngest's dev server with MCP to enable AI agents to automatically test and debug asynchronous workflows.
- Leverage event management and monitoring tools to track function execution and diagnose failures in real time.
- Implement context engineering in AI agents to improve accuracy and decision-making during research tasks.
- Separate LLMs by role to optimize performance based on model strengths for different tasks.
- Adopt a workflow architecture that allows AI to operate inside the execution environment for better debugging and autonomy.