Every project at Kalvium Labs starts with a call, not a proposal. There’s a reason for that. Proposals written without context are guesswork dressed up in formatting. I’ve seen enough AI builds go sideways to know that the first 30 minutes of conversation tell me more about a project’s odds than any requirements document.
Over the past year managing AI development projects, I’ve landed on five questions I ask every client before engineering starts. Not a checklist I read off a screen. A conversation I steer toward the things that actually determine whether a project will work.
These questions aren’t about technology. They’re about clarity. A founder who can answer all five has thought deeply enough that we can move fast. If they struggle with questions two or three, that’s fine — the conversation itself helps them get there.
Question 1: What problem are you solving for your users?
Not “what do you want to build?” That question comes later. This one comes first because it changes what “later” looks like.
Founders often start by describing a product: “I want an AI chatbot that answers questions about our documentation.” That’s a feature description, not a problem statement. The problem might be: “Our support team spends 14 hours a week answering the same 30 questions, and response times are averaging 6 hours.” Those are different starting points. The first one leads to a chatbot. The second one might lead to a chatbot, or it might lead to a better FAQ page, an automated email responder, or a self-service knowledge base with AI-assisted search.
I’m not trying to talk anyone out of what they want. I’m trying to understand what their users actually need, because that’s what determines whether the thing we build gets used.
When a founder answers this question well, it sounds like: “Our sales reps spend 40 minutes after every call manually filling out compliance forms. They skip fields, miss details, and our compliance team catches errors two weeks later. We need something that listens to the call and fills the form automatically.”
That answer gives me a problem (manual compliance documentation), a user (sales reps), a pain metric (40 minutes per call), and a failure mode (errors caught too late). I can scope a 72-hour prototype from that description alone.
When a founder answers with “I want to use AI to make our process more efficient,” I know we need 15 more minutes on this question before moving to the next one.
Question 2: What does your data actually look like right now?
This is where most projects reveal their real complexity. Not in the AI model selection, not in the architecture, but in the data.
I’ve learned to ask this question with the word “actually” in it because founders describe their ideal data state, not their current one. “We have a clean database of all our customer interactions” might mean a well-structured Postgres table with consistent schemas. It might also mean 18 months of Slack messages exported as JSON, three Google Sheets maintained by different people, and a CRM that hasn’t been updated since October.
Both are workable. But the engineering plan for each is completely different.
What I’m listening for:
Format and location. Is the data in a database, an API, files on a drive, or someone’s laptop? Is it structured (rows and columns) or unstructured (documents, emails, transcripts)?
Volume. 500 documents is a different problem than 50,000. For RAG-based systems, the chunking and indexing strategy changes significantly depending on corpus size.
Quality. Are there duplicates, inconsistent formatting, missing fields? Poor data quality is one of the most common reasons AI projects take longer than expected. Cleaning isn’t glamorous, but it’s often the difference between a prototype that impresses and one that confuses.
Access. Can we get to it today, or does someone need to grant permissions, sign a data processing agreement, or export it from a system that doesn’t have an API?
When a client says “our data is pretty well-organized,” I’ve learned to follow up with: “Can you show me a sample? Even a screenshot of five rows?” That single follow-up has saved me from at least a dozen misunderstandings.
The honest version of this question is: I need to know whether data preparation is a two-hour task or a two-week task, because that changes the project timeline, the cost, and what we can realistically show in the first prototype.
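The quick sample check described above can be automated in a few lines. This is a minimal sketch of the kind of five-minute profile I run on those "five rows" — the field names and sample rows here are invented for illustration; a real check would run on the client's actual export.

```python
# Hypothetical sample profiler: counts duplicates, missing required fields,
# and unexpected field names in a small data sample. Field names are
# illustrative, not from any real client.

def profile_sample(rows, required_fields):
    """Profile a list of dict records for common data-quality issues."""
    seen = set()
    duplicates = 0
    missing = {f: 0 for f in required_fields}
    unexpected = set()
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            duplicates += 1          # exact duplicate record
        seen.add(key)
        for f in required_fields:
            if not row.get(f):
                missing[f] += 1      # absent or empty required field
        unexpected |= set(row) - set(required_fields)
    return {
        "rows": len(rows),
        "duplicates": duplicates,
        "missing": missing,
        "unexpected_fields": sorted(unexpected),
    }

sample = [
    {"customer": "Acme", "channel": "email", "transcript": "..."},
    {"customer": "Acme", "channel": "email", "transcript": "..."},  # duplicate
    {"customer": "Globex", "channel": "", "Transcript": "..."},     # empty + miscased field
]
report = profile_sample(sample, required_fields=["customer", "channel", "transcript"])
print(report)
```

Even on a five-row screenshot's worth of data, a check like this surfaces exactly the issues that separate the two-hour cleanup from the two-week one: duplicates, empty fields, and inconsistent schemas (note the miscased `Transcript` showing up as an unexpected field).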
Question 3: What does “working” mean to you?
This might be the most important question of the five. It sounds simple. It rarely gets a simple answer the first time.
“Working” for an AI product isn’t binary the way it is for traditional software. A login page either works or it doesn’t. An AI-powered document search might return the right answer 78% of the time, return a partially correct answer 15% of the time, and miss completely 7% of the time. Is that “working”?
For some clients, yes. For others, anything below 95% accuracy makes the product useless in their context. Both are valid answers, and they lead to radically different engineering decisions, timelines, and budgets.
What I’m trying to establish:
The success scenario. “If the prototype does X, you’d be confident enough to move to a full build.” What is X, specifically? A concrete description: “A user uploads a 20-page contract, asks ‘what are the termination clauses?’, and gets the right answer with page references in under 5 seconds.”
The failure threshold. At what point does the output become unacceptable? If the AI gets 1 out of 10 answers wrong, is that fine? 3 out of 10? This matters because it determines whether we’re building a system that assists humans (where occasional errors are caught and corrected) or one that replaces a manual process entirely (where errors have consequences). Anthropic’s guide to model evaluations is a useful starting point for thinking through this systematically.
The user’s technical comfort. Will the end user be a developer comfortable with rough interfaces, or a non-technical employee who needs something polished and intuitive? The answer affects how much of the prototype budget goes to AI versus UI.
I’ve had clients tell me “working means it’s accurate” and then, after 10 minutes of this conversation, land on something far more specific: “Working means our compliance team can review AI-generated reports instead of writing them from scratch, and they catch fewer than 2 errors per 100 reports.” That’s a testable definition. Everything we build points toward proving or disproving it.
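A definition like that one is testable precisely because it reduces to a number and a threshold. As a sketch (the review log below is invented; in practice it would come from the compliance team's spot checks), the check the prototype has to pass looks like this:

```python
# Hypothetical acceptance check for the "fewer than 2 errors per 100 reports"
# definition. The review data is illustrative only.

def error_rate_per_100(reviews):
    """reviews: list of booleans, True = reviewer found an error in that report."""
    if not reviews:
        raise ValueError("need at least one reviewed report")
    return 100 * sum(reviews) / len(reviews)

THRESHOLD = 2.0  # from the client's definition of "working"

reviews = [False] * 99 + [True]  # 1 flagged report out of 100 reviewed
rate = error_rate_per_100(reviews)
print(f"{rate:.1f} errors per 100 reports -> {'pass' if rate < THRESHOLD else 'fail'}")
```

The point isn't the arithmetic. It's that once "working" is stated this way, the prototype has a pass/fail condition both sides agreed on before any code was written.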
When I describe this process to other PMs, some ask why I don’t just send a requirements form. Because forms get filled out quickly with surface-level answers. Conversations surface the details that determine whether the project succeeds. The first 48 hours of every build depend on the quality of this conversation.
Question 4: Who on your side will make decisions during the build?
AI development projects move fast. Ours start with a prototype in 72 hours. That speed only works if decisions don’t get stuck waiting for someone who isn’t in the room.
I ask this directly because I’ve managed projects where the person on the call wasn’t the person approving deliverables. By sprint two, scope drifts because the new decision-maker brings context the original conversation didn’t include. This isn’t a criticism of delegation — it’s a logistics question. I need to know:
Who reviews the prototype and says “yes, this is what I meant” or “no, this needs to change”? If that person isn’t on this call, I need to talk to them before we start.
Who approves scope changes? When the AI reveals something unexpected (and it always does), someone needs to decide whether we adjust the scope or stick to the original plan. That decision can’t wait three days for a board meeting.
What’s the communication preference? Some founders want daily Slack updates. Others prefer a sprint review summary at the end of each week and nothing in between unless something is blocked. Both work. Mismatched expectations don’t.
One pattern I’ve noticed: when a founder says “I’ll be the decision-maker but loop in my CTO for technical reviews,” the project usually runs smoothly. When a founder says “we’ll figure out the review process as we go,” I push back gently — not because I need a formal process, but because I need a name and a response time.
Every AI development engagement runs on two currencies: engineering hours and decision speed. I can control the first one. The second one depends entirely on the client’s side.
Question 5: What happens if the prototype says this won’t work?
This is the question founders least expect, and the one they tell me afterward they most appreciated.
We build prototypes in 72 hours specifically so founders can see their idea working before committing to a full build. But “working” includes the possibility that the answer is: “This approach doesn’t solve your problem the way you expected.”
The prototype might show that the AI handles 60% of the use case but struggles with the rest because the data isn’t structured for it, or that the accuracy threshold the client needs requires a budget that exceeds what they’d planned. None of these are failures — they’re information. Cheap at 72 hours. Expensive at month three.
What I’m asking with this question is: “Are you prepared for the prototype to change your plan?” Because if the answer is “no, we’re committed to this exact approach regardless,” then prototyping isn’t the right first step. A fixed-scope build might be. And I’d rather know that now than discover it when I’m presenting prototype results that suggest a pivot.
The best answer I’ve heard to this question came from a founder building an AI tool for his sales team. He said: “If it doesn’t work, I want to understand why so I can decide whether to change the approach or change the product. Either way, I’ll know something I don’t know today.” We built his prototype. It worked. He came back for a second project.
Why These Five and Not Others
I’ve tried longer lists — ten questions, full intake forms. They don’t work better. Founders lose focus after 30 minutes, and most of the remaining detail gets answered naturally once engineering starts.
These five questions cover the dimensions that matter before code:
- Problem clarity (do we know what we’re solving?)
- Data readiness (can we actually build this?)
- Success criteria (how will we know it works?)
- Decision structure (can the project move at the right speed?)
- Risk tolerance (what happens when reality doesn’t match the plan?)
Everything else — the tech stack, the model selection, the architecture — follows from the answers to these five. The technical conversation in the first 48 hours builds directly on what I learn here.
I’ve run discovery calls where a founder answered all five questions in 20 minutes and we had a prototype brief ready by the end of the call. I’ve also run calls where question two took 25 minutes because the data situation was genuinely complex, and that was exactly the right use of time.
The questions don’t change. What changes is how deep each one goes, depending on the project.
What Happens After the Call
Within 4 hours, I write a brief: answers to all five questions, open risks, and a proposed prototype scope. That goes to the engineering team — Anil Gulecha, our CTO (ex-HackerRank, ex-Google), reviews anything technically uncertain. The client gets a simplified version: here’s what we understood, here’s what we’d prototype, here’s the timeline. If anything doesn’t match, we catch it now. From there, the 72-hour prototype process begins, with the brief as the north star for everything the team builds.
People and process before product. That’s not a phrase I use for effect. It’s the reason these five questions happen before a single line of code gets written.
FAQ
How do I evaluate an AI development company before signing a contract?
Start with the discovery call. A good AI development company will ask about your problem, your data, and your success criteria before proposing a solution. If someone sends you a proposal without understanding your data situation, that’s a signal. Also look for prototype-first approaches: seeing your idea working before committing to a full build reduces risk significantly. Check whether projects are supervised by experienced technical leadership, not just staffed with engineers.
What should I prepare before a discovery call with an AI development company?
Three things help the most. First, a clear description of the problem you’re solving (not just the product you want). Second, an honest assessment of your data: what you have, where it lives, and what shape it’s in. Third, knowing who on your team will make decisions during the build. You don’t need a technical requirements document. The discovery call exists to create that together.
What if I don’t have clean data ready for an AI project?
That’s more common than most founders expect. If your data needs cleaning, reformatting, or consolidation, we factor that into the project plan. For prototypes, we can work with a representative sample or synthetic data to prove the AI approach works, then integrate real data in the full build. The important thing is being honest about the data situation during the discovery call so the timeline reflects reality.
Want to see what these five questions would look like for your specific project? Book a 30-minute call. I’ll walk you through the discovery process and tell you honestly whether we can prototype it in 72 hours.