The Pattern I See Every Week
A founder calls us with an AI product idea. They’ve been thinking about it for months. They have a 20-page PRD, a competitive analysis, three architecture diagrams, and a spreadsheet comparing LLM pricing.
They haven’t built anything.
I’ve managed enough projects to know where this leads. More planning. More research. More “what if we also add…” conversations. And six months later, they’ve spent time and money on everything except the one thing that matters: seeing the idea actually work.
This is the problem the Prototype First methodology solves. Instead of planning for months, we build a working version in 72 hours. Not a mockup. Not wireframes. Working code that runs, models that respond, an interface you can interact with.
You see it running. Then you decide.
What a 72-Hour Prototype Actually Is (and Isn’t)
Let me be precise, because this is where expectations go sideways.
It IS:
- Working code. A real application running on a staging environment. You can open it in a browser, interact with it, break it.
- Real AI. Actual LLM calls, actual RAG pipelines, actual data flowing through the system. Not hardcoded responses.
- Enough to answer the core question. “Does this approach work for this use case?” If yes, we scope the full build. If no, you’ve spent 72 hours learning that, not 6 months. Y Combinator frames the MVP the same way: the goal is learning, not shipping.
It ISN’T:
- Production-ready. No authentication, no exhaustive edge-case error handling, no load testing. That comes in the full build.
- Feature-complete. A prototype proves the core hypothesis. It doesn’t have user management, admin dashboards, or the 15 features in your PRD that are actually Phase 2.
- A design showcase. The UI is functional, not polished. If we’re spending time on pixel-perfect design in a prototype, we’re solving the wrong problem.
If the distinction between a prototype, a POC, and an MVP still feels fuzzy, we’ve mapped out exactly how they differ and when to use each.
The Scoping Conversation: This Is Where It Really Happens
The 72 hours of building is the visible part. The invisible part (and honestly the harder part) is the scoping conversation that happens before we write a single line of code.
Here’s what I walk through with every founder:
Question 1: “What’s the one thing this prototype needs to prove?”
Not three things. Not five. One.
If you’re building an AI customer support chatbot, the question isn’t “can we handle 50 different query types?” The question is: “can the AI accurately answer questions using our knowledge base?”
If you’re building an AI analytics tool, the question isn’t “can we generate beautiful charts?” The question is: “can a non-technical user ask a plain-English question and get a correct, data-backed answer?”
Everything in the prototype serves that one question. Everything else is noise. If you’re still at the stage of deciding which problem to prototype first — which AI product direction to pursue at all — the most useful exercise is to write down, in one sentence, what the prototype needs to prove. If you can’t do that yet, you’re not ready to scope.
Question 2: “What data do we need, and do we have it?”
This is the question that kills timelines. A founder says “we want AI that analyzes our customer data” and then we discover the data is in 14 different spreadsheets, two legacy databases, and someone’s email inbox.
For a 72-hour prototype, we need data available on Day 1. If it’s not, we use a representative sample or synthetic data. The prototype proves the AI approach works. We can connect to the real data in the full build.
I’ve learned to ask this question in the very first conversation, not on Day 1 of the build. Finding out you don’t have the data on the day you’re supposed to start building is how 72-hour prototypes become 3-week prototypes.
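When the real data isn't ready on Day 1, a tiny synthetic sample that mirrors the eventual schema is usually enough to start building. A rough sketch of what that can look like (the field names and topics are purely illustrative, not a real schema):

```python
# Hedged sketch: generate a small synthetic sample when real data isn't
# available on Day 1. The goal is to mirror the real schema's shape, not
# to be statistically representative.
import random

random.seed(7)  # deterministic sample for repeatable prototype demos

def synthetic_tickets(n: int) -> list[dict]:
    """Fake support tickets with a consistent id/topic/text shape."""
    topics = ["billing", "login", "shipping"]
    rows = []
    for i in range(n):
        topic = random.choice(topics)
        rows.append({"id": i, "topic": topic, "text": f"Issue about {topic}"})
    return rows

sample = synthetic_tickets(20)
```

The prototype's pipeline runs against `sample`; swapping in the real data later is a one-line change, which is the whole point.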
Question 3: “What happens if the AI is wrong?”
This is the question founders don’t expect. And it’s the most important one.
If the AI gives a wrong answer in a customer support chatbot, is that a minor annoyance or a compliance violation? If the analytics tool returns an incorrect number, does someone make a bad business decision based on it?
The answer shapes the entire architecture. High-stakes domains need guardrails, confidence thresholds, and human-in-the-loop fallbacks, even in a prototype. Low-stakes domains can tolerate more AI autonomy.
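Even at prototype stage, a basic guardrail can be a few lines. A hedged sketch, assuming some confidence score is available (from log-probs, a verifier model, or a heuristic); the threshold value and the `Answer` shape are illustrative assumptions:

```python
# Hedged sketch: route low-confidence answers to a human instead of
# returning them directly. Threshold and data shapes are illustrative.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # 0.0-1.0, e.g. derived from log-probs or a verifier

HIGH_STAKES_THRESHOLD = 0.85  # assumption: tuned per domain, not universal

def route(answer: Answer, high_stakes: bool) -> str:
    """Return the AI answer, or escalate when stakes are high and confidence is low."""
    if high_stakes and answer.confidence < HIGH_STAKES_THRESHOLD:
        return "Escalated to a human agent for review."
    return answer.text
```

A compliance-sensitive chatbot runs with `high_stakes=True` from day one; an internal brainstorming tool can skip the escalation path entirely.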
Question 4: “Who is going to use this, and how?”
Not the abstract “target user.” The actual person who will sit down with this prototype and try it.
Is it the founder themselves? A team member? An end customer? This changes what we build. A prototype for a technical founder can have a minimal UI and terminal output. A prototype for a non-technical end user needs a clean interface they can navigate without instructions.
What 72 Hours Actually Looks Like
Here’s the real timeline, from the projects I’ve managed:
Day 0: Scoping Call (30 minutes)
The founder describes the idea. I ask the four questions above. We agree on what the prototype will prove and what it won’t include.
By the end of this call, everyone knows:
- What the prototype demonstrates
- What tech stack we’re using
- What data we need (and whether we have it)
- What “done” looks like
Day 1: Foundation (8 hours)
The engineering team sets up the project. For a typical AI prototype, this means:
- Project scaffolding (Next.js or FastAPI, depending on the use case)
- Data pipeline: connecting to the data source, ingesting content, setting up the vector database if it’s a RAG use case
- First LLM integration: basic prompt, basic response, confirm the AI approach works at a fundamental level
By end of Day 1: the AI is responding to queries. It’s rough, but it works.
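The Day-1 milestone can be sketched end to end. This is a toy version: the "retrieval" is simple word overlap standing in for embeddings and a vector database, and `call_llm` is a stub where a real LLM client would go:

```python
# Hedged sketch of the Day-1 skeleton: ingest documents, retrieve the most
# relevant one for a query, pass it to the model. Word-overlap scoring is a
# stand-in for real embeddings; call_llm is a stub for a real API client.
def ingest(docs: list[str]):
    """Build a toy index: each doc paired with its set of lowercase words."""
    return [(doc, set(doc.lower().split())) for doc in docs]

def retrieve(index, query: str, k: int = 1) -> list[str]:
    """Return the k docs sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(index, key=lambda item: len(q & item[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

def call_llm(prompt: str) -> str:
    # Stub: a real prototype would call an actual LLM API here.
    return f"[model response to: {prompt[:60]}...]"

def answer(index, question: str) -> str:
    context = "\n".join(retrieve(index, question))
    return call_llm(f"Answer using this context:\n{context}\n\nQ: {question}")

index = ingest(["Refunds are processed within 5 days.",
                "Support hours are 9am to 5pm weekdays."])
print(answer(index, "How long do refunds take?"))
```

Rough, exactly as described, but the full path (data in, retrieval, model, response out) exists and can be iterated on.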
Day 2: Core Logic (8 hours)
This is where the engineering depth shows up:
- Prompt refinement: iterating on the system prompt, few-shot examples, output formatting
- Pipeline optimization: chunking strategy, retrieval tuning, response quality
- Basic UI: a functional interface the founder can interact with
By end of Day 2: the prototype is usable. You can interact with it and see the core value proposition working.
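Prompt refinement at this stage is mostly structured assembly: a system instruction, a strict output format, and a few worked examples that get iterated against real queries. A sketch with illustrative content (the example Q&A pair and JSON format are assumptions, not a fixed template):

```python
# Hedged sketch of Day-2 prompt assembly: system instruction + output
# format + few-shot examples. Contents are illustrative placeholders.
FEW_SHOT = [
    ("What were Q3 sales?", '{"answer": "$1.2M", "source": "sales_q3.csv"}'),
]

def build_prompt(question: str) -> str:
    """Assemble the full prompt sent to the model for one user question."""
    lines = [
        "You are a data assistant. Reply ONLY with JSON in this shape:",
        '{"answer": "...", "source": "..."}',
        "",
    ]
    for q, a in FEW_SHOT:  # worked examples anchor the output format
        lines += [f"Q: {q}", f"A: {a}", ""]
    lines.append(f"Q: {question}")
    return "\n".join(lines)
```

Most of Day 2 is editing `FEW_SHOT` and the instruction text while watching how the model's answers change, which is faster than it sounds once the Day-1 skeleton exists.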
Day 3: Polish + Demo (4-6 hours)
- Edge case handling: what happens with empty inputs, unexpected queries, data gaps
- UI cleanup: not polished, but navigable
- Deploy to a staging environment: a URL the founder can access (we use Cloudflare Workers for zero-config, globally distributed deployments)
- Demo preparation: walkthrough of what works, what doesn’t, and what the full build would look like
By end of Day 3: the founder sees their idea running. Working code, real AI, real data.
The Conversation After the Prototype
This is the part that makes Prototype First different from just “building fast.”
After the demo, one of three things happens:
Outcome 1: “This is exactly what I wanted.” Great. We scope the full build. The prototype becomes the foundation, not throwaway code, but the starting point for production. We lock scope, timeline, and price, and move into the production build with architecture that’s already been validated in the real world.
Outcome 2: “This is interesting, but I want to change the direction.” Also great. The founder saw the idea working and realized the real opportunity is slightly different. Maybe the chatbot should focus on internal teams instead of customers. Maybe the analytics tool needs to emphasize a different metric. The prototype saved months of building in the wrong direction.
Outcome 3: “This doesn’t solve the problem I thought it would.” Still great. The founder learned in 72 hours what would have taken months and significantly more money to discover. No invoice, no guilt. The prototype did its job: it answered the question.
All three outcomes are wins. That’s the point.
Why Most AI Projects Skip This (and Suffer for It)
The traditional approach looks like this:
Discovery → Requirements → Proposal → SOW → Design → Build → Test → Launch
That’s 3-6 months before anyone sees anything working. And the dirty secret? Requirements written in Month 1 are almost always wrong by Month 3, because the founder learns things during the build that change what they need. Google’s Design Sprint methodology identified this pattern years ago — compress the learning cycle into days, not months. This delayed feedback loop is one of the core reasons AI products fail — the planning phase creates false confidence that doesn’t survive contact with real users.
The Prototype First approach:
Scoping Call → 72-Hour Prototype → Decision → Build (if yes)
The learning that normally happens in Month 3 happens in Day 3 instead. That’s the entire value proposition.
What’s Realistic in 72 Hours: Real Examples
To keep this concrete, here’s what 72-hour prototypes have looked like on projects I’ve managed:
AI Analytics Tool: A working chat interface where users type plain-English questions about financial data and get accurate, data-backed answers. Agent writes SQL, executes it, explains the results. Includes a reasoning panel showing how the AI arrived at the answer.
Assessment Platform: A student-facing portal with MCQ assessments, scoring, progress tracking, and certificate generation. Users could register, take assessments, and see their results.
Talent Management System: A functional portal where talent could create profiles and casting professionals could post opportunities and review applications. Core matching logic working.
Each of these started as a prototype that the client could interact with. Each became a full production system after the founder said “yes, let’s build this.”
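The analytics flow described above (the agent drafts SQL, executes it, explains the result) can be sketched with a stubbed generator, here against an in-memory SQLite table. The table schema and the hardcoded query are illustrative; in the real prototype the SQL comes from an LLM prompt:

```python
# Hedged sketch of the text-to-SQL loop: model drafts SQL, we execute it
# against the data, and return the answer with its reasoning trail.
import sqlite3

def generate_sql(question: str) -> str:
    # Stub standing in for an LLM text-to-SQL call.
    return "SELECT SUM(amount) FROM sales WHERE region = 'EU'"

def ask(conn: sqlite3.Connection, question: str) -> dict:
    """Answer a plain-English question with a value and a reasoning trail."""
    sql = generate_sql(question)
    value = conn.execute(sql).fetchone()[0]
    return {"answer": value, "reasoning": f"Ran: {sql}"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EU", 100.0), ("EU", 250.0), ("US", 400.0)])
print(ask(conn, "What are total EU sales?"))
```

The `reasoning` field is what powers the reasoning panel mentioned above: the user sees not just the number but the query that produced it.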
The One Thing I’d Tell Every Founder
If you’re sitting on an AI product idea and you’re still in the “planning” phase after a month, stop planning.
The plan will change the moment you see the idea running. Every founder I’ve worked with has adjusted their vision after seeing the prototype. Not because the plan was bad, but because seeing something real gives you information that no amount of planning can produce.
The 72-hour prototype isn’t about speed for the sake of speed. It’s about learning faster. The sooner you see your idea working (or not working), the sooner you can make a real decision about what to build.
That’s what Prototype First means. Build it. See it. Then decide.
FAQ
How much does a 72-hour prototype cost?
A 72-hour prototype typically falls within our small fixed-bid range of $5,000 to $8,000. The exact figure depends on what the prototype needs to prove and whether the required data is already available on Day 1. If the prototype leads to a full build, that engagement is scoped and priced separately based on production requirements.
What happens after the prototype is complete?
After the demo, you have three paths: approve a full production build, adjust the direction based on what you saw, or walk away if the approach did not answer your core question. If you move forward, the prototype code becomes the foundation for the production system rather than work that gets discarded. We scope the full build based on what the prototype revealed, not on assumptions written before anyone saw the idea running.
What types of AI products can be built in 72 hours?
We have prototyped chat interfaces with RAG pipelines, AI analytics tools that answer plain-English questions against structured data, assessment platforms, and talent-matching systems, all within the 72-hour window. The constraint that shapes scope is not complexity but focus: if we can identify one core question the prototype must answer, we can build a working version fast. Products requiring large-scale data migrations or multi-system integrations on Day 1 are scoped as a longer engagement from the start.
Do I need technical knowledge to participate in the scoping process?
No. The scoping conversation is built around what the product needs to prove, not around architectural decisions. You describe the problem you are solving and who will use it; we translate that into what gets built. The output is a working interface you can interact with directly, so evaluating the prototype does not require reading code or reviewing technical documentation.
What if the prototype shows that the idea needs significant changes?
That is one of the three outcomes we plan for, and it is a good result. Seeing a working version almost always surfaces refinements the founder did not anticipate during the planning phase. We document what the prototype revealed, what would need to change, and what a revised build scope would look like. Finding a direction shift at the 72-hour mark costs far less than finding it three months into a full production build.
Have an AI product idea? Book a 30-minute call. We’ll tell you if we can prototype it in 72 hours, and if we can’t, we’ll tell you that too.