The project had been in discovery for three weeks when we ran the kickoff. The founder had described the product as a “smart onboarding assistant.” Forty minutes into the meeting, the operations manager mentioned offhand that new users were churning because they couldn’t find a specific configuration step buried in the third screen of a six-screen setup flow.
That was the actual problem. Not an AI assistant. A broken onboarding funnel.
We would have built the wrong thing. The kickoff saved us.
That’s what a good kickoff does: it surfaces the real problem before anyone writes a line of code. I’ve run these meetings for 20+ AI builds now, and the ones that went sideways almost always had something in common. Either we didn’t run a proper kickoff, or we ran one that felt like a formality.
Here’s what I actually do.
The Pre-Read I Send 24 Hours Before
Most kickoffs open with the client explaining their company and product from the beginning. I try to avoid this. Not because the context isn’t useful, but because the meeting is 90 minutes and I’d rather spend that time on requirements than introductions.
So I send four questions 24 hours before:
- What problem does this project solve, in one sentence?
- Who uses the product this AI feature will live in, and how often?
- What does a successful outcome look like six months after we ship?
- What data or systems will we need to connect to?
Clients who answer these honestly show up to the meeting with their thinking organized. Clients who don’t have clear answers yet arrive aware of the gaps, which is better than discovering the gaps live.
I also send a one-paragraph description of what we’ll cover, so no one’s surprised by what we’re asking. “We’ll spend 90 minutes getting clear on the problem, agreeing on success criteria, and mapping out dependencies. We’ll leave with a Sprint 0 document everyone can sign off on.”
No pre-read means 30 minutes of the kickoff becomes context-setting that could have happened async.
The Five-Part Agenda
I run every AI kickoff with the same five sections. The time splits flex depending on what each section surfaces, but the order stays the same.
Problem context (20 min). I ask the founder or product lead to walk me through the problem from the user’s perspective. Not the technical solution they have in mind. The user’s problem. I ask: “When does a user first feel this problem? What do they do about it today?” The answer usually reveals whether the problem is real or assumed.
This is also where I check whether the project as described matches the problem. It often doesn’t match exactly. That’s fine. Better to know now.
Success criteria (20 min). This section determines whether the project will have a clear finish line. I ask three questions here, and I’ve learned to ask them in this order:
First: “What would you need to see to feel confident enough to ship this to your users?” This opens up what “done” looks like in practice. For more on this, read our guide on What Good AI Delivery Looks Like.
Second: “What accuracy do you need before you’d show this to your users?” Most clients haven’t answered this before I ask. The first answer is usually “as high as possible.” I follow up: “If it was right 80% of the time, would that be good enough? What about 90%? 95%?” The number they land on tells you how hard the engineering problem is before you start.
Third: “What metric will you use, thirty days after launch, to know the project worked?” This forces a conversation about instrumentation. If there’s no metric, there’s no way to define success post-launch.
I write the answers to all three in a shared doc during the meeting. Everyone in the room can see them. By the end of this section, we have the core of the Sprint 0 document.
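Once there’s a number, I find it helps to show what that number becomes during the build: an automated evaluation gate. Here’s a minimal sketch in Python; the threshold, the exact-match scoring, and the toy data are illustrative placeholders, not how any particular client’s eval works.

```python
# A toy evaluation gate: the accuracy bar agreed at kickoff, enforced
# against a held-out labeled set. Threshold and data are placeholders.

THRESHOLD = 0.90  # the number the client landed on in the meeting


def accuracy(predictions: list[str], labels: list[str]) -> float:
    """Fraction of predictions that exactly match the gold labels."""
    assert len(predictions) == len(labels), "prediction/label count mismatch"
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)


if __name__ == "__main__":
    # In a real build these come from the model and a labeled eval set.
    preds = ["configure", "skip", "configure", "configure"]
    gold = ["configure", "skip", "skip", "configure"]

    score = accuracy(preds, gold)
    print(f"accuracy = {score:.1%} (bar = {THRESHOLD:.0%})")
    if score < THRESHOLD:
        raise SystemExit("Below the bar agreed at kickoff -- not ready to ship.")
```

The point isn’t the script. It’s that a number written into the Sprint 0 document can become a gate the build either passes or doesn’t, instead of a figure everyone remembers differently.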
Data and integrations (20 min). AI projects fail on data more than anything else. “We have the data” is something clients say before they understand what the project actually needs. I’ve learned to dig here.
I ask: “Can you walk me through where the data lives today?” Not “do you have data?” but where it physically lives. This is what Google calls data validation in their MLOps framework — and in practice it’s what separates a prototype that works in a notebook from one that works in production. A CSV on someone’s laptop is different from a Postgres database with a live API. Both are “data we have.” Only one of them is usable without significant prep work.
I also ask about the systems we’ll need to connect to. Authentication, permissions, API rate limits, legacy formats. I keep a checklist of what we’ll need access to and start getting answers on day one, because access requests that come in week three have a way of pushing everything back by two weeks.
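One habit that speeds this section up: turn the inventory into a day-one smoke test that proves each source is actually reachable. Here’s a minimal sketch, assuming one CSV export and one Postgres database; the paths, DSN, and table name are hypothetical, and psycopg2 is just one client library you could use.

```python
# A day-one smoke test: confirm each data source in the inventory is
# reachable and readable. Paths, DSN, and table name are placeholders.
import csv

import psycopg2  # pip install psycopg2-binary


def check_csv(path: str) -> None:
    """Open the CSV and report its shape, proving it exists and parses."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = sum(1 for _ in reader)
    print(f"{path}: {rows} rows x {len(header)} columns")


def check_postgres(dsn: str, table: str) -> None:
    """Connect and count rows, proving credentials and permissions work."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(f"SELECT count(*) FROM {table}")  # table name is trusted here
        print(f"{table}: {cur.fetchone()[0]} rows")


if __name__ == "__main__":
    check_csv("exports/users.csv")
    check_postgres("postgresql://readonly@db.internal:5432/app", "onboarding_events")
```

If the CSV check passes in a minute and the Postgres check stalls on credentials, that’s the access request I want filed in week one, not week three.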
Constraints and risks (15 min). I ask what would make this project harder than expected. Budget ceiling, timeline pressure, a regulatory constraint I don’t know about, an internal team that needs to approve the output before it touches users. These things come out in this section if I ask directly.
This is also when I surface my own constraints: what we don’t know yet, what we’ll need to prototype before committing to a timeline, what decisions we’ll need the client to make during the build rather than upfront.
I don’t treat this as a risk log exercise. It’s more of a mutual “what might bite us” conversation. The tone matters. If it feels like I’m trying to add caveats to reduce my liability, the client will hold back. If it feels like we’re both trying to find the surprises early, the conversation is more honest.
Communications and ownership (15 min). Who makes decisions on the client side? Who do I contact if something is blocked? Who reviews work at sprint demos? These questions sound like they have obvious answers, but I’ve been surprised by how often the person who signed the contract isn’t the person I’ll be working with daily.
I also set expectations for the engagement pattern here: weekly demos every Thursday, async updates via Slack between demos, sprint retrospective every two weeks. The structure I use for weekly demos is something I’ve described before. The kickoff is when I explain it so there are no surprises.
What I Do When the Project Changes in the Room
It happens. The problem they described in the intro call is not quite the problem that comes out in the kickoff. Sometimes it’s a sharper version of the same idea. Sometimes it’s genuinely different.
My rule: if the core problem statement changes, we update the Sprint 0 document before leaving. I’d rather spend twenty minutes rewriting the problem definition in the meeting than discover six weeks in that we built a solution to a problem the client no longer has.
If the project changes enough that we need more time before we can scope it, I say so. I’d rather add a Sprint 0 week than start sprinting in the wrong direction.
The Sprint 0 Document I Send Within 48 Hours
After the kickoff I write a Sprint 0 document. It’s not long. It covers:
The problem, in one paragraph. If I can’t write the problem in one paragraph, we didn’t clarify it enough in the kickoff. This paragraph is the thing I’ll return to every two weeks to check if we’re still building what we said we’d build.
Success criteria, with specific metrics. The accuracy threshold, the user metric, the post-launch measurement plan. Locked in writing so the client and engineering team are working toward the same bar.
Data and integration inventory. What we have, what we still need access to, and who owns each dependency.
What we’re not building. This is as important as what we are building. Scope creep usually enters through things that were never explicitly excluded. I’ve written about how I handle scope once a project is underway, but excluding things in writing at Sprint 0 prevents most of it from starting.
Communications. Who’s on Slack, who’s in the weekly demo, who has sign-off authority on deliverables.
I send it to the client within 48 hours and ask for sign-off before we start Sprint 1. If something is wrong, better to surface it now. The sign-off isn’t a legal formality. It’s a forcing function for the client to actually read what we’ve agreed to.
A Note on What AI Projects Need That Regular Software Projects Don’t
A kickoff for a standard software feature covers roughly the same ground: what are we building, for whom, and by when. Atlassian’s project kickoff guide is a good general reference for that baseline. An AI project kickoff needs two additional things on top of it.
First, it needs a data conversation that goes much deeper than normal. Software features don’t usually depend on whether you have enough historical data in the right format. AI features often do. I try to understand the data situation before we agree to a timeline. If we don’t know what we have, Sprint 0 is partly about finding out.
Second, it needs an honest conversation about accuracy expectations, and what happens at the edges. Every AI model has failure cases. The kickoff is when I ask: “What’s your plan if the model is wrong about this?” Not as a way to lower the bar, but because edge cases that aren’t discussed in week one become crises in week four.
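That question has a concrete engineering shape. One common pattern, sketched below with hypothetical names and numbers, is to agree at the kickoff on a confidence floor and a fallback behavior, so the “model is wrong” case is a designed path rather than a surprise.

```python
# A hypothetical edge-case policy: when the model isn't confident, fall
# back to a safe default instead of guessing. The floor value and the
# fallback action are placeholders a client would agree to at kickoff.
CONFIDENCE_FLOOR = 0.70


def next_action(prediction: str, confidence: float) -> str:
    """Return the model's suggestion, or the agreed fallback below the floor."""
    if confidence >= CONFIDENCE_FLOOR:
        return prediction
    return "show_full_setup_checklist"  # the agreed "model is wrong" plan


if __name__ == "__main__":
    print(next_action("highlight_config_step", 0.91))  # confident: use it
    print(next_action("highlight_config_step", 0.42))  # not confident: fall back
```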
The clients who’ve had the smoothest builds with us are usually the ones who came into the kickoff thinking about those two things. Sometimes that’s because they’ve done this before. Sometimes it’s because of the pre-read I sent 24 hours earlier.
FAQ
How long should an AI project kickoff be?
90 minutes is the right length for most projects. Shorter and you don’t get through data and integrations with enough depth. Longer and attention drifts before you reach the communications section. If we need more than 90 minutes, it’s usually a signal that the problem hasn’t been defined clearly enough before the kickoff, not that we need a longer meeting.
Who should be in the room for a kickoff?
On the client side: the person who owns the project decision (usually the founder or product lead), the person who will work with the AI output daily (an operations manager, a sales rep, whoever the end user is), and, if the build is technically complex, someone who knows the data and integrations. On our side: me, and the lead engineer for the build. More people means fewer direct conversations and more people waiting to speak.
What if we discover during the kickoff that the data we need doesn’t exist?
Then Sprint 0 is about figuring out if we can get it, generate it synthetically, or change the problem. We don’t start Sprint 1 while that question is open. I’ve seen projects where the data problem was discovered in week three. The client had to pause engineering, wait four weeks for data collection, and restart. That conversation at the kickoff costs one week. At week three it costs six.
What’s the difference between a kickoff and a discovery call?
A discovery call is what happens before a client commits to the project. It’s typically 30-45 minutes, focused on whether we’re a fit and whether the problem is real. A kickoff happens after commitment, at the start of engineering. It’s where we go from “yes, we’re doing this” to “here’s exactly what we’re doing and how.” They’re different conversations, and mixing them up wastes both.
What do you do when the client wants to start building before the Sprint 0 document is signed?
I push back, politely but directly. The Sprint 0 document takes 48 hours to write and review. Starting without it means starting without agreed success criteria, which means any outcome can be argued to be a success or a failure. I tell clients: we can start two days later and know what we’re building, or start now and find out what we were building after we’re done. Every client I’ve said this to has waited the two days.
If you’re starting an AI project and want to make sure the first three weeks go well, we can run the discovery call and kickoff together. Book a 30-minute call and we’ll tell you what we’d need to know before Sprint 1 starts.