
Day 1 Discovery: 5 Questions Before I Quote an AI Project

The five questions our PM asks every founder before quoting an AI project. How each one shapes scope, timeline, and a number you can stand behind.

Dharini S
People and process before product — turning founder visions into shipped tech
TL;DR
  • Most AI projects that exceed their budget weren't under-budgeted. They were under-scoped before the quote was written.
  • Asking for the budget range upfront isn't impolite. It's the fastest way to scope the right version of a project.
  • The data situation question is where most AI project timelines reveal their real complexity. Ask for a sample before estimating anything.
  • If the success metric is 'it feels right,' add 30% to the estimate. The last 30% of ambiguous projects is always spent figuring out what the goal actually was.
  • The quote that doesn't get negotiated came from a call where all five questions got clear answers.

A founder called me three weeks ago. Before I’d finished saying hello, he asked: “How much does an AI feature cost? Just a rough number.”

I said: “Somewhere between $5,000 and $50,000. Which is why I can’t give you a better number without a call.”

We scheduled 30 minutes the next morning. By the end, he knew roughly what his project would cost, why it sat in that range, and what he’d need to prepare before we could commit to a number. The proposal came two days later. He signed without revision.

That call worked because of five questions I’ve refined over two years of scoping AI development services for founders. Not a checklist I read off a screen. A conversation I steer toward the information that makes quoting possible.

Why Quoting Without Discovery Usually Fails

Most AI projects that exceed their budget weren’t under-budgeted to begin with. They were under-scoped.

A founder gets quoted $12,000 for an invoice-processing AI. Reasonable. But the quote assumed clean, structured data. The actual data lives in three systems, two of which haven’t been updated since 2022. Now there’s six weeks of data engineering that wasn’t in scope. The build cost didn’t change; the scope did.

McKinsey’s State of AI research consistently identifies poor data quality and incomplete requirements as the top reasons AI projects run late or deliver less than expected. Not because AI is hard to scope. Because the right conversation didn’t happen before someone wrote a number.

These five questions close most of that gap. They’re different from the scoping questions I ask once engineering is about to start. These are earlier: the commercial questions that let me quote a project I can actually deliver.

Question 1: Walk Me Through the Current Process, Step by Step

Before I know what to build, I need to know what’s actually happening today.

“We want an AI to process our invoices” tells me almost nothing. “We have two people who spend six hours a day reviewing invoices in a shared spreadsheet, and they miss about 12% of exceptions because the format varies by vendor” tells me the scope, the data shape, the ROI case, and the success baseline. Those are different conversations.

The question I literally ask: “Walk me through what happens today, step by step.” Then I stop talking.

The answer tells me whether there’s a manual process to automate (faster to scope, clearer ROI) or something net-new with no existing workflow to reference (riskier to price). It tells me where the failure modes are. It tells me whether the business case is obvious or needs constructing.

When a founder says “it’s complicated, I’ll send you a brief first,” I ask again in the call. Briefs written before conversation almost always describe an ideal process, not the one that actually runs. I’ve scoped from briefs twice in my career. Both times the estimate had to be redone once I saw the real thing.

Question 2: What’s Your Budget Range?

This is the question most PMs skip in a first call because it feels impolite. I ask it in the first ten minutes.

Not because I’ll pad the quote to match whatever number they give. Because a $5,000 project and a $50,000 project scope differently in architecture, timeline, and what corners we can cut. If I build a detailed proposal for a $40,000 build and the founder’s range is $12,000, we’ve both wasted three days.

I frame it directly: “I’ll give you an honest number regardless. But if your current ceiling is $8,000 and the real project is closer to $35,000, I’d rather know now so we can figure out which version actually fits your constraints.”

For context on where AI development sits in 2026: small, fixed-scope builds (2-4 weeks) run $5,000-$8,000. Medium projects with integrations and custom logic (4-10 weeks) run $15,000-$25,000. Larger builds with custom training, complex pipelines, or production hardening run $30,000-$50,000 and take 3-6 months. Sharing this range early gives founders a framework and helps them self-select into the right conversation.

Founders who say “I’ll know the budget once I see the scope” usually mean they haven’t secured budget yet. That’s important to surface before engineering hours go into a proposal for an unfunded project.

Question 3: What Does Your Data Situation Look Like?

Every AI product runs on data. The state of that data drives cost as much as the build itself.

A founder with three years of customer calls in S3, transcribed, labeled, and in consistent JSON format is in a completely different position than a founder with three years of calls on a local server, no transcription, and no metadata. Both have data. One is ready to build on. The other has six to eight weeks of pre-processing before the AI work starts.

I ask: “What data would this AI need to work with, and where does it live?”

If the answer is vague, I ask for a sample before I estimate anything. Not a full export. Twenty to fifty files from the actual dataset. Google’s Rules of Machine Learning puts it plainly: data pipelines are usually harder than the model itself. In my experience, a sample tells me more in twenty minutes than an hour of description. I’ve had projects quoted at four weeks based on “we have clean structured data” turn into ten weeks after I opened thirty actual files.

Asking for the sample before writing a number is the single most reliable protection against discovering the real scope mid-sprint.
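A first pass on that sample doesn't need a data engineer. Here's a minimal sketch of the kind of check I mean, assuming the sample arrives as a folder of JSON files (a hypothetical format; real samples are just as often CSVs, PDFs, or audio): it reports how many files parse at all and whether they share a single schema. A low parse rate or many distinct schemas is exactly the signal that turns a four-week quote into a ten-week one.

```python
import json
from pathlib import Path
from collections import Counter

def audit_sample(sample_dir: str) -> dict:
    """Rough health check on a client's data sample:
    what fraction parses as JSON, and how many distinct
    key sets (schemas) appear across the files."""
    files = list(Path(sample_dir).glob("*.json"))
    parsed = 0
    schemas = Counter()  # one entry per distinct set of top-level keys
    for f in files:
        try:
            record = json.loads(f.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, UnicodeDecodeError):
            continue  # unparseable file: counts against the parse rate
        parsed += 1
        if isinstance(record, dict):
            schemas[tuple(sorted(record))] += 1
    return {
        "files": len(files),
        "parse_rate": parsed / len(files) if files else 0.0,
        "distinct_schemas": len(schemas),  # 1 means every file shares the same keys
    }
```

Twenty minutes with a script like this, and the "we have clean structured data" claim is either confirmed or quantified.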

Question 4: What Does “Working” Look Like 90 Days From Now?

This question reveals whether the scope is actually clear.

“Working” means something different to every founder. One means “live in production, handling real requests.” Another means “demo-ready for our next board meeting.” A third means “my head of sales stops complaining about this.” These are different timelines, quality bars, and costs.

The question I ask: “If we deliver this in 90 days and you call it a success, what specifically has happened?” Then I push for a behavior or a number, not a feeling. “It feels faster” doesn’t drive a sprint. “Customer support tickets about this process drop by 40%” does.

A founder who answers this quickly usually has a well-formed project. One who takes time to think often hasn’t defined success yet, which means we need to do that together before we scope anything.

When I leave the call without a clear success metric, I add 30% to the estimate. Not as padding. As the cost of the conversation that’ll happen in week 6 about what the project was really supposed to accomplish. I’d rather build that time in honestly than absorb it silently.

Question 5: Who Else Needs to Say Yes Before This Moves?

This question is about the commercial timeline, not the technical one.

If the founder is the decision-maker and has budget authority, a proposal can turn into a signed agreement in a week. If they need board approval, sign-off from a CTO who hasn’t been in this conversation, or a procurement process, that’s four to eight weeks before engineering starts. Both are fine. I need to know which one I’m in before I invest in a detailed proposal.

I ask: “Is there anyone else who should be part of this decision? Someone technical, or someone who’ll actually use what we build?”

The answers sometimes reshape the scope entirely. The person who’ll use the tool daily often has a different sense of “working” than the founder who commissioned it. When there’s a technical decision-maker who hasn’t been in the conversation, I ask for a second call with them before the proposal. Not to resell the project. To make sure the technical assumptions in the quote reflect what their environment actually looks like.

The "5 questions before we write code" go deeper on the project side. These quoting questions happen before that, at the point where I'm deciding whether we can commit to a number and what that number should be.

What Comes After the Five Questions

If all five get clear answers in one call, I write a one-page brief within 24 hours. Not a proposal. A document that says: here’s what I understand about the problem, here’s the version I think fits your timeline and budget, and here are the two or three things I still need to validate before I give you a final number.

The proposal follows the brief. When the brief is right, the proposal rarely gets disputed.

Not every call resolves cleanly. Sometimes the data question opens a six-email thread. Sometimes the budget question surfaces a three-month commercial process I wasn’t expecting. That’s fine. Better to know in the first call than three weeks into engineering.

The founders who can answer all five clearly have thought through their projects enough that we can move fast. The ones who work through the questions during the call get there too, and they arrive with more clarity than they had before. Either way, the call earns its time.


If you’re at the point of figuring out scope and budget for an AI project, book a 30-minute call. We’ll run through our version of this and you’ll leave knowing roughly what it costs and why.

FAQ

How long does an AI discovery call usually take?

Thirty minutes is standard for straightforward projects. If the data situation is complex or multiple stakeholders are involved, plan for 45 to 60 minutes. The goal isn’t to finish in 30 minutes; it’s to get clear answers to the questions that make a quote possible. Some calls end in 22 minutes. A few run longer. We don’t rush the data conversation.

What should I prepare before a discovery call with an AI development company?

The most useful things to bring: a one-paragraph description of the current process and where it breaks, a rough sense of your budget range, and access to a small sample of the data the AI would work with (20 to 50 files is enough). You don’t need a requirements document. You need enough clarity to answer “what does working look like?” with something measurable.

What’s the typical cost for an AI development project in 2026?

Small, fixed-scope builds run $5,000-$8,000 and take 2-4 weeks. Medium projects with custom logic and integrations run $15,000-$25,000 over 4-10 weeks. Larger builds involving custom training, complex data pipelines, or production hardening run $30,000-$50,000 and up, with 3-6 month timelines. The range is wide because scope varies more in AI than in conventional software. The discovery call is what narrows it.

How long does it take to receive a proposal after the discovery call?

If the five questions get clear answers in one call, we send a one-page brief within 24 hours and a full proposal within 72 hours. If we need a data sample or a follow-up with a technical stakeholder, it typically adds three to five business days. We don’t write proposals based on assumptions we haven’t validated.

What if I don’t know the answer to one of the discovery questions?

That’s more common than founders expect. If you’re not sure what “working” looks like, we’ll define it together. If you don’t know your data format, we’ll ask for a sample to look at. The discovery call is a conversation, not an intake form. The goal is shared clarity before we commit to a number on either side.

#ai development services · #discovery call · #ai project scoping · #ai development company · #project management · #quoting ai projects


Written by

Dharini S

People and process before product — turning founder visions into shipped tech

Dharini sits between the founder's vision and the engineering team, making sure things move in the right direction — whether that's a full-stack product, an LLM integration, or an agent-based solution. Her background in instructional design and program management means she thinks about people first — how they process information, where they get stuck, what they actually need — before jumping to solutions.
