How to Actually Deliver AI Projects (APIs, Hosting & Handover Explained)
So, you guys are always asking me: Nate, how do I host workflows? What does the handover process look like? Should I host a workflow, or should the client? What about security? These questions come up all the time, and since there's no good video covering all of it, I figured I'd make one. In this video, I'm going to walk you through the entire process of fulfilling an AI workflow or agent after a client pays you, step by step. And at the end, I'll show you a real example from one of the first AI workflows I ever sold. Let's get into it. Before you build anything, the first question you have to answer is simple: who is going to host the workflow? I get this question constantly, and since most of you are using n8n, I'm going to frame the answer around how n8n actually works, because its license is what decides what you can and cannot do. I've also worked with tons of different clients and delivered workflows in tons of different ways. I've charged for just the JSON file and handed it over along with a Loom setup guide explaining how it works. I've had clients invite me to their own n8n account so I could develop right there in their environment. I've developed the workflow in my own environment, then logged into their account with credentials they gave me, imported everything, and logged out. And I've helped people spin up their own instance of n8n and then invite me as a team member once the account existed. I say all that to say there are a lot of different ways to do it, so let me tell you about the main options and what I recommend. The key rule you need to know is that you can only use n8n for your own business internally unless you have a paid commercial or enterprise license when you're hosting it. You are allowed to sell services around n8n, but you are not allowed to turn n8n into your own product.
So everything about hosting comes down to one thing: is this n8n instance being used by one business internally, or are you exposing it as a platform to other people? Now that we've gone over that rule, we have three options for hosting workflows. Honestly, I would pretty much always go with the first option, and that's what I recommend to you all, but I'll still touch on the other two because it's important to understand the difference. Option number one is where the client hosts n8n. This is the safest and cleanest model for almost everyone. Each client you work with has their own n8n instance, and you simply work inside of it. The client either buys their own n8n Cloud plan and invites you as a user, or they set up a self-hosted or locally hosted instance and give you access to it. Now, I know that as a service provider you want to make their experience as frictionless as possible, so you can help them set it up. You can help them configure their server or even provision an account for them, but then they have to own it and pay for it, because you can't mark up the hosting or essentially charge for access to n8n. This keeps you compliant, because n8n is being used for that company's internal business processes. You are just providing consulting and workflow development, which is completely allowed, and you are not giving multiple clients access to the same instance. This is basically the Zapier model: every client has their own seat, and you're the builder. If you're doing client work, this is what I recommend pretty much all the time. Option two is you host n8n, but only for your own agency. In this setup you run n8n on your own server, but nobody sees it except your business.
This is for your own internal automations like lead routing, content workflows, internal AI agents, or anything else where the client never touches n8n directly or ever needs to. You can also deliver a service using n8n running on your own infrastructure, as long as clients don't log in, don't connect their own API keys, things like that. This is compliant because n8n is only powering your own operations. You're not exposing it as a hosted platform, and you're not giving clients access to it. Just think of it as your own internal engine. An example of this could be sending a client something like an automated report or research as a service; that's the deliverable. Even if your own internal automation on your own hosted n8n is powering it, that's fine, because they are just paying for the deliverable. And then option three is when you host n8n as the product, and this is where you would need a commercial or enterprise license. This is the line you cannot cross on the free or Sustainable Use license. Essentially, you cannot build a SaaS (software as a service) product where n8n is the value. Even if the client never sees the n8n UI, if your offer is basically "give me your credentials and I'll run your automations on my n8n server," that is not allowed without a commercial agreement. This model only really makes sense if you're building a SaaS or selling automation as a subscription where n8n is clearly the engine. If that's the case, you definitely want to talk to n8n sales and get a commercial or enterprise license, and I will say those are not cheap. So the short version is simple: if you're building workflows for clients, let them host it. If you're running automations for your own company, host it yourself. And if you want to build a SaaS or platform, you need a commercial license.
And once you've decided where the automation will actually live, you can move into planning the build, structuring the data, and preparing for the handover. So next, let's look at security and data protection. Once you've built the workflow, your job is to make sure the data moving through it stays secure. That means no leaking sensitive information, no exposing personal data, and no breaking privacy laws like GDPR. Let's go over how n8n handles security and how you can explain it clearly to your clients. In a few minutes, I'll talk more specifically about API key management and billing, since that deserves its own section. The first thing to understand is how n8n protects sensitive fields inside your automations. Credentials in n8n are encrypted at rest and decrypted in memory at the moment the workflow runs. Nodes simply reference credentials by name, and if a teammate or client doesn't have permission to the workspace that holds those credentials, they won't be able to see the raw values: the API keys, the passwords, whatever you want to call them. This is why handing off a workflow is safe when it's done correctly: you're not exposing sensitive fields inside nodes, and you're not storing anything in plain text. The platform is built to keep secrets locked away so only the automation engine can access them when it needs to. Another big part of security is webhook hardening. A webhook is basically a public door into a workflow, which means you need to treat it with the same seriousness as any inbound request to an application. That means using HTTPS so data is encrypted in transit, and using signing secrets or verification tokens for services like Stripe, GitHub, or any provider that supports signature validation. You can have n8n verify that signature before it actually trusts the payload and lets it into the workflow.
And never put sensitive data inside the URL of a webhook. If you want to get more advanced and the use case calls for it, you could implement things like rate limits or additional authentication checks to prevent spam, brute forcing, or automated abuse. The way I would explain this to clients is simple: every external trigger hitting your automation must be authenticated so that only approved systems can talk to the workflow, which means random people cannot guess the URL and start hitting your CRM or internal systems for data. While we're on this topic, it's also worth building prompt guardrails into the system so people can't jailbreak or prompt-inject your AI agents. Now, another big responsibility is handling CRM, payment, or other personal data, because this information is often regulated under GDPR and similar laws. Anything that could identify a person is protected. This is not legal advice, and you should always consult professionals based on your industry and the laws that govern you and your clients, but there are a few basic best practices to follow. Use data minimization, which means only bringing in the fields you actually need. Limit who can see the workflow runs so only the right people have access to the logs and payloads. Understand that a client must have a legal basis for collecting and processing the data they pass into the workflow. And if you're processing data on the client's behalf, you're usually acting as a data processor, which means you may need a data processing agreement in place. You also need to make sure your automations don't make it impossible for the client to honor requests like data deletion, data corrections, or access requests.
Basically, that means you should know exactly where the data flows so it can be removed or updated if needed. n8n helps with this because you can prune executions, trim logs, and limit the amount of data the system stores over time. One of the biggest advantages of n8n is that it's source available (often loosely called open source), which means it can be fully self-hosted, giving the client the option to keep all of their workflow data in their own environment. You can run n8n on their infrastructure, connect it only to tools they approve and trust, and even use local or self-hosted AI models instead of sending data to OpenAI or other closed-source proprietary models. This gives true data sovereignty: the client chooses where the server lives, how it's secured, and who has access. For privacy-sensitive clients, this is a huge selling point. Instead of pushing sensitive data through random cloud services, you can run the entire automation engine inside their own locked room, whether that's on-prem or inside a private VPS. And that's the foundation of security and data privacy: your job is to build workflows that move data safely, keep sensitive information protected, and give clients confidence that nothing is leaking. Now that you understand the security side, the next section is about API key management and billing, because that's another area where I get a ton of questions. The main ones are: who owns the API keys, and who pays for the usage? The cleanest answer I can give you is that the client should always pay for their own API keys and usage. This keeps everything transparent and predictable, and it avoids a lot of headaches later. The ideal setup is having the client sign up for the tool themselves, enter their billing information, generate the API key, and paste it directly into n8n.
When you do it this way, the key never gets sent over the internet to you, and the client keeps full control over their account. They can see the usage, they can see the charges, they can turn it off if they want, and nothing about the automation is hidden from them. It's just a much cleaner working relationship. Now, the best way to do this (because, like I said, you want to remove as much friction as possible) is to send them a Loom video walking them through exactly where to click, how to create the key, and where to paste it into n8n. Keep it dead simple for them. You could even walk them through it on a Zoom call if they prefer. Now, could you set up the API accounts on your side and just bill them later? Yes, you could. There's nothing non-compliant about that, but it can create all kinds of problems, because they don't see the usage and they don't understand where their money is going. And if you're marking up usage or charging a fixed rate, it can get confusing fast as the automation scales, or if something breaks or spikes. Even if it feels easier in the moment because the client doesn't have to go do anything, in the long run it can create more questions than answers. Letting them own it and pay for it keeps everything clean. Now, if a client is intimidated by that process and wants you to handle the key directly, you still want to make sure you're transferring it securely. Don't just have them send it over Slack, ClickUp, text, or email. Have them drop it into some sort of secure vault, like 1Password or any encrypted secret-sharing tool where they can generate a one-time link. Then you can copy that key into n8n yourself, and the link expires so no one else can ever access that vault. As a small bonus, you could even offer them a dashboard that shows all their API keys and all the billing in one spot.
That gives them visibility, and it gives you credibility, because you're helping them manage their system like a real piece of infrastructure. So the simple rule is: clients own their API keys, clients pay for their usage, and you make the process painless for them. This keeps everything clear, secure, and scalable as the automation stack grows and your professional relationship matures. Now, once all that's been decided and you're actually starting to think about handing over the project, you have to make sure it's been fully tested in the right way before you send it over. If you're not careful, the workflow may have bugs you didn't spot, and that can hurt your reputation and relationship with the client. The first step is planning your test data with the client. You don't want to test with made-up examples that have nothing to do with their business, so early on, ask them for a small sample set that looks like real usage. I typically do this before we sign the contract so they know what is expected of them, because if they delay getting you that sample data, it's going to delay your process as well. That could be emails, support tickets, transcripts, CRM records, whatever actually fits the workflow in production. And of course, if needed, they can anonymize it. Then you agree on what success looks like: what is a good output, and what must never happen (things like wrong tags, broken links, leaked info, or sending the wrong person the wrong message). You can explain it very simply: "Before we go live, we're going to run your real examples through the system so you can see exactly how it behaves." Then, inside your own testing, you want to think less like a developer clicking into every node and checking configuration, and more like an engineer planning for failure. With automations, especially when they involve AI, you have to accept that you don't know what you don't know.
And once the system goes into production, real users and real data will always reveal edge cases you didn't think about. So during testing, you should intentionally look for worst-case scenarios and ask yourself: what happens with bad data, no data, duplicate data, or something completely unexpected? Instead of assuming the workflow will run smoothly forever, you want to build in guardrails. Maybe the workflow times out gracefully so nothing happens. Maybe you set up an error workflow that alerts the team. Maybe you log all failures into a Google Sheet so you can track patterns over time. The idea is not to eliminate every possible issue, because you can't, but to make sure that when something does break, it breaks safely and quietly, and in a way that gives you enough information to go fix it fast. Once that works, you step back and treat the whole workflow like a black box. You feed in a lot of examples, not just one or two: ideally dozens or even hundreds of sample inputs if you can get them. For each one, you log what came in, what happened in the middle, and what the final output was. Then you compare those outputs to the success criteria you agreed on with the client. You flag the failures, the weird edge cases, and the borderline results you want to talk through with them. This is your internal QA (quality assurance) pass. The goal is to catch as much as possible before the client ever tries it, and I would do internal QA for at least a few days before having them get in there and provide feedback. Now, AI adds another layer on top of that, because you're not just checking that the AI node runs, you're checking the quality of what it says. For beginners, focus on a few simple checks, like relevance and correctness: does the answer actually respond to the request with accurate information? Another is tone and safety.
It shouldn't be toxic, off-brand, or leaking hidden system prompts and private info. And then you've got consistency: if you send in the same 10 inputs, are you getting roughly the same 10 answers every time? Behind the scenes, you can run simple A/B tests and evaluations where you try different prompts and different models on the same data set and track which ones give the best results. To the client, you can phrase it like: "We tested several prompts and models on your real examples and kept the one that hit the highest quality and consistency. Here's the evaluation data we ran." You could even use n8n's built-in evaluations feature. Of course, you have to be able to actually show all that, and that's where logging comes in. Logging is what makes all of this not feel like magic. I like to have my workflow store execution history in a Google Sheet that tracks all the inputs, outputs, tool calls, errors, and tokens, so you can look through the log and identify patterns: common failure types, recurring bad inputs, or weak spots in your prompt or model choice. That same log becomes your evidence when you talk to the client, and you can show them what you tested and why you made certain decisions or improvements. After you're happy internally with the system, you move to client-facing QA, where you give them a clear way to test the system. That might be a chat box, a form, or a simple UI. Just make it simple; you don't want them to have to get into n8n and look at all of that mess. Then you ask them what they think about the system: the outputs, the tone, the formatting, anything like that. A lot of times at this point, if you did everything right, you'll just be doing little prompt-tuning and model-tuning tweaks.
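That black-box QA pass can be sketched as a tiny harness: run each sample input through the workflow, score the output against the success criteria you agreed with the client, and keep one log row per run. Everything here is a placeholder, not anything n8n ships: `runWorkflow` and the example criteria would be wired to your own execution and logging setup (e.g. a Google Sheet).

```javascript
// Minimal QA-harness sketch; names like runWorkflow are hypothetical.
function qaPass(samples, runWorkflow, criteria) {
  const log = [];
  for (const input of samples) {
    const output = runWorkflow(input);
    // Record which agreed-on checks this run failed.
    const failures = criteria
      .filter((check) => !check.test(input, output))
      .map((check) => check.name);
    log.push({ input, output, failures, pass: failures.length === 0 });
  }
  const passed = log.filter((row) => row.pass).length;
  return { log, passRate: log.length ? passed / log.length : 0 };
}

// Example success criteria, the kind you'd agree on with the client up front.
const exampleCriteria = [
  { name: "non-empty reply", test: (_input, output) => output.trim().length > 0 },
  { name: "no leaked secret", test: (_input, output) => !/sk-[A-Za-z0-9]/.test(output) },
];
```

Each row of `log` maps directly onto the Google Sheet columns described above, so the same data doubles as your evidence when reviewing results with the client.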
And then, to wrap up, you can record a short update video, show one or two full runs from input to workflow to output, point to the logs, and explain how the system handled those real examples. That is the kind of QA that builds trust and makes clients want to work with you again. Now, once you've tested the workflow successfully, it's time to start the handover process. This is the delivery phase, and it looks a little different depending on your situation. The first thing that affects handover is whether you built the workflow directly in their environment. If you did, the handover is a lot easier, because everything is already set up, including their credentials. If you didn't, you'll have a bigger transfer process where you help them move credentials and recreate some of those connections. The second thing that affects handover is whether this is the end of the project, or whether you've already scoped out more work or some sort of ongoing maintenance retainer. If you're sticking around, you might keep some testing infrastructure in place, but if you're not, your handover needs to be more final and complete. Either way, there are a few key steps to follow. First, duplicate the workflow: keep one version somewhere as a backup or testing version, and push the clean one into production. This is exactly how software teams work. You have a test environment where you experiment and iterate, and a production environment where only a stable version lives. Anytime you need to make changes or updates, you do that on the test version first, confirm everything still works, and then move that version into production. The same idea applies when n8n itself releases updates or when your tools and integrations change. You never want to update your production environment blindly.
You want to update the test setup first, load the workflow, and make sure all the functionality is still intact. Once you know it still behaves as it should, you update the version the client actually relies on. This avoids outages, broken automations, and a lot of unnecessary stress. On top of that, you always want to back up your workflows. You can store the exported JSON on GitHub, Google Drive, or even a simple Google Sheet, so you always have previous versions to revert to if needed. And if you want to go a step further, you can build an automated backup process using n8n itself, so it periodically exports and saves the workflows somewhere else. The next thing is workflow hygiene: make sure the workflow is clean and easy to understand. Use clear naming for each step, label each step, and add sticky notes around the workflow explaining what it's doing and why you built it that way. The goal is that anyone from their team or yours could later open that workflow (or a PDF of it) and understand the logic right away. You also want to double-check that there are no sensitive keys or tokens anywhere in the workflow before you hand it over, because you want a clean handoff where the client knows exactly where their API keys go and how to set them up on their own account. This set of deliverables could also include a Loom walkthrough, a quick one-or-two-minute video where you show how the system works, how to configure it, and what to do if certain elements need updating. There's never one exact right way to automate a process, and everyone's brain works a little differently, so if you can explain what you were thinking when you built it, it's going to be really helpful.
And whether this is the only project you're doing for them or you're staying on retainer, good documentation is always valuable, because, like I said, if someone on their team takes it over later, or you bring in a developer to help maintain the account, everything is super clear and no one has to guess what was built or why. It really protects both sides: it helps the client feel supported and confident, and it helps you avoid being the bottleneck whenever something needs to change. This is how you deliver workflows professionally and set yourself up for a long-term relationship instead of one-off projects. Now, this is the part people never really discuss online: the legal and financial side of things. After handing over the workflow, there are a few things you want to make sure are agreed on in advance. The first one is billing, and you want to be crystal clear on this. First, you close out the current project and get paid for what you just built. Then you decide if there will be an ongoing paid relationship to keep everything healthy over time through a maintenance retainer. Before anything else, revisit the scope of work you both agreed on at the start. Your contract or scope of work should already say what "finished" means: which workflows you promised to build, which systems they connect to, what success looks like, essentially the definition of done. At handover, you walk through each item and confirm everything works the way you agreed, and once the client confirms the project is complete, you send the final invoice. You can frame it as simply as: the project ends when the agreed workflows are live, tested, documented, and accepted, and then the project invoice is due. The next piece is deciding whether to offer a maintenance retainer. A retainer is a separate ongoing agreement where the client pays you to keep the system up and running.
This usually covers things like bug fixes, small tweaks, updates, dependency changes, monitoring, and basic security checks. It does not cover new features, new workflows, or major scope changes; those should be a separate project. You can also set basic service levels so clients know what to expect. For example, a critical outage might get a response within a few hours, while minor requests might be handled within a few days. These don't have to be complicated, but the expectations should be clear. Now, you also want clarity on ownership and IP. Many consulting agreements say the client owns the work product once it's paid for, but you should still protect yourself by keeping the right to reuse generic patterns or components that are not specific to their business, such as, like I said, reusable tools, subworkflows, or basic templates. It also helps to define the exit process. If the client ever wants to move away from your services, you should outline what you will hand over. This could include exported workflows, documentation, and a handover call, along with what is included and what is billable. A simple explanation of this: once the product is paid for, the client has the right to use and run these workflows in their business, and if they later want to move providers, you will help hand everything off in a structured way. For beginners, the main thing is simple: stop doing all of this informally. Put scope, definition of done, payment, maintenance, service levels, bugs versus changes, ownership, and exit terms into a clear written agreement. When both sides know what they are buying and what happens after go-live, projects run smoother and you avoid miscommunication. So now that you know what to do in theory, let's look at a real-life example of a workflow I sold and the process I went through when handing it over.
This one was a personal assistant workflow, one of the first I ever delivered. The client had never actually heard of n8n; he just watched my YouTube video of the ultimate assistant, then reached out and said he wanted something like that. After discovery and after signing the contract, we got on a kickoff call. On that call, I had a list of everything I needed from him, which was spelled out in the client expectations portion of the contract. I walked him through exactly what we needed, helped him get those API keys, sign up for an n8n account, things like that. Then I showed him how to invite me to that n8n instance. Right there on the call, we connected his CRM, his calendar, his email, and the data sources he wanted this assistant to be able to use. From there, I plugged in a few of my own credentials for testing purposes and could just hit the ground running. And the best part was that, at the end, handover was almost instant, because all I had to do was swap my test credentials out for his, and he could start using it right away and giving me feedback. Now, with something like a personal assistant that is highly autonomous, QA can be intense, because it's super conversational and there are lots of different tools it can call. It's also client-facing: it has memory, it has tone, it has lots of things you need to make sure the client is happy with. So there was a lot of back and forth, a lot of tweaks and refinements, especially to the system prompt, and that's completely normal. But throughout this process, he began asking for bigger features and new integrations, and at that point I had to protect the scope, because it's super important to make sure you're not adding in a bunch of stuff you're not getting paid for.
So I told him which of those requests fit inside version one and which would be added to the backlog for a future phase. After version one was complete and accepted, we would scope out a new project around those extra features. That alone saved me from doing a ton of unpaid work. And the final lesson here ties back to API keys. Early on, I used to try to make things easy for clients by running everything under my own billing and just sending them an invoice at the end of the month. It sounds nice in theory, but in reality it gets messy fast. Like I mentioned earlier, clients want predictable costs, and token usage is impossible to estimate perfectly. You can also end up dealing with late invoices or confusion about what they're actually paying for, and it all comes back to the same rule: it's so much cleaner, simpler, and more scalable if they own those accounts and keys from the start. It makes the handover easier, it makes maintenance easier, and it keeps you out of the billing-babysitter role. So hopefully seeing a real example gives you a better sense of how all the pieces come together in a live client project. This is how you build, host, test, hand over, and maintain AI workflows without creating headaches for you or the client. So that's the full process. First, you decide where the workflow is going to live. I always recommend that clients host everything themselves, whether that's n8n Cloud, a VPS, or something local, and you just help them configure it. Before you deliver anything, you make sure security and data privacy are handled the right way. Then you figure out who owns what with API keys and how those keys get into the system without creating a mess. After that, you run your testing and QA so you know the workflow is reliable, safe, and producing the right outputs. Then you move into the actual handover, which is how you deliver the system, set expectations, and give them documentation.
And finally, you close out the project on the legal and billing side and decide whether there will be ongoing maintenance after go-live. That's the full life cycle of building and delivering AI workflows the right way. So I know we covered a lot of information in this video, so what I've done is thrown all of it into a full resource guide you can access completely free. All you have to do is join my free Skool community; the link is down in the description. If you enjoyed this one and want to dive even deeper, definitely check out my Plus community. We've got over 3,000 members in there building businesses with n8n every single day, so it's a great environment to surround yourself with like-minded people. So, that's going to do it. If you learned something new, please give the video a like and subscribe; it definitely helps me out a ton. And let me know what else you guys want to see in the comments. As always, I appreciate you making it to the end of the video. I'll see you in the next one. Thanks, everyone.