From Idea to MVP in 4 Weeks: Our Process
We've shipped 50+ MVPs. Whether one succeeds or fails has almost nothing to do with code quality. It has everything to do with the scope decisions made in the first three days.
Here's the exact four-week process we run, including the hard conversations we have with founders and the technical shortcuts we take on purpose.
Week 0: Before We Write Code
The most important week doesn't count toward the four. This is where we prevent the #1 MVP killer: building the wrong thing.
The Scope Conversation
Every founder arrives with a feature list. It's always three times larger than what fits in four weeks. Our job is to find the smallest version that can validate the core hypothesis.
We use a simple framework:
Must Have (Week 1-2): What's the one thing this product does that nothing else does? This is the MVP. Everything else is noise.
Should Have (Week 3): What makes the must-have feature usable in production? Auth, basic settings, error handling.
Won't Have (Post-MVP): Everything else. Yes, including that analytics dashboard, the admin panel, the email notification system, and the Slack integration.
The rule: If removing a feature doesn't invalidate the core hypothesis, it's a "Won't Have."
Technical Stack Decision
We don't deliberate on the stack. For 90% of MVPs, we use:
- Next.js — Full-stack framework, deploys to Vercel in minutes
- Supabase — Auth, database, storage, real-time — all managed
- Tailwind CSS — Ship UI fast without design system debates
- Vercel — Zero-config deployment with preview URLs for every PR
We deviate only when the product demands it (ML pipeline → Python backend, real-time multiplayer → custom WebSocket server, hardware integration → React Native).
Why not [other framework]? Because the stack doesn't matter for validation. What matters is speed to production and the team's familiarity. We ship Next.js MVPs faster than anything else.
Week 1: Foundation Sprint
Day 1-2: Skeleton
In two days, we ship a deployable skeleton:
- Supabase project created with auth configured
- Next.js app with routing structure matching the product's information architecture
- Basic layout: header, responsive shell, mobile navigation
- Deployment pipeline: push to main → Vercel deploys → preview URL shared with founder
The founder has a live URL they can share by end of day 2. It doesn't do anything yet, but the psychological impact is real.
Day 3-5: Core Feature
The rest of the week focuses on the single core feature. Not the UI polish. Not the edge cases. The happy path.
Example: For a scheduling SaaS, the core feature is: "User creates an availability window and shares a booking link. Guest books a time slot." That's it. No reminders, no calendar sync, no team scheduling.
We build the database schema, API routes, and UI for this one flow. By Friday, the founder can demo it.
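That one flow can be sketched with a couple of types and a single guard function. The schema and names below are hypothetical, not from an actual client project; they just show how small the core feature's logic really is:

```typescript
// Hypothetical data model for the scheduling example: an availability
// window and the bookings made against it. Times are epoch milliseconds.
interface AvailabilityWindow {
  id: string;
  ownerId: string;
  startsAt: number;
  endsAt: number;
  slotMinutes: number; // e.g. 30-minute slots
}

interface Booking {
  windowId: string;
  guestEmail: string;
  startsAt: number;
  endsAt: number;
}

// The one invariant the happy path must enforce: a new booking fits
// inside the window and does not overlap an existing booking.
function canBook(
  win: AvailabilityWindow,
  existing: Booking[],
  startsAt: number
): boolean {
  const endsAt = startsAt + win.slotMinutes * 60_000;
  const insideWindow = startsAt >= win.startsAt && endsAt <= win.endsAt;
  const overlaps = existing.some(
    (b) => startsAt < b.endsAt && endsAt > b.startsAt
  );
  return insideWindow && !overlaps;
}
```

Everything else in the product (reminders, calendar sync, team scheduling) would layer on top of this invariant, which is exactly why it waits until post-MVP.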
Design Approach
We don't use Figma for MVPs. We design in code using Tailwind utility classes and a consistent component library we've built over time. The founder sees real, interactive UI from day one — not static mockups that need "developer handoff."
If the product needs a unique brand identity (consumer-facing, competitive market), we allocate 2-3 hours on day 1 for color palette, typography, and logo placement. That's it.
Week 2: Complete the Loop
Day 6-8: Supporting Features
The core feature from week 1 gets the supporting pieces that make it a product:
- Auth flows — Sign up, sign in, password reset, email verification
- User settings — Profile, preferences, account deletion
- Error states — Empty states, loading states, error boundaries
- Mobile responsiveness — Every screen works on 375px width
Day 9-10: Data Integrity
This is where most MVPs cut corners and pay for it later. We invest two days in:
- Row Level Security — Every Supabase table gets RLS policies. No user can see another user's data.
- Input validation — Server-side validation on all API routes. Client-side validation for UX.
- Rate limiting — Basic protection against abuse.
Why not skip security for MVP? Because a data leak kills trust permanently. An MVP with a security vulnerability is worse than no MVP.
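Server-side validation doesn't need to be elaborate. Here is a minimal, dependency-free sketch of the kind of guard that sits in front of a booking API route; the payload shape is hypothetical, and in practice a schema library like Zod does this with less code:

```typescript
// Minimal server-side validation for a hypothetical booking payload.
// Returns a list of problems; an empty list means the input is safe to use.
interface BookingInput {
  guestEmail: string;
  startsAt: number; // epoch milliseconds
}

function validateBooking(body: unknown): string[] {
  const errors: string[] = [];
  if (typeof body !== "object" || body === null) {
    return ["body must be a JSON object"];
  }
  const b = body as Partial<BookingInput>;
  if (typeof b.guestEmail !== "string" || !/^\S+@\S+\.\S+$/.test(b.guestEmail)) {
    errors.push("guestEmail must be a valid email address");
  }
  if (typeof b.startsAt !== "number" || !Number.isFinite(b.startsAt)) {
    errors.push("startsAt must be a finite timestamp");
  } else if (b.startsAt < Date.now()) {
    errors.push("startsAt must be in the future");
  }
  return errors;
}
```

The API route rejects with a 400 and the error list when validation fails; client-side validation mirrors the same rules purely for UX.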
Week 3: Polish Sprint
Day 11-13: UX Polish
The product works. Now we make it feel good:
- Loading states — Skeleton loaders, optimistic updates, no blank flashes
- Transitions — Page transitions, modal animations, toast notifications
- Copy and microcopy — Button labels, error messages, empty state text, onboarding hints
- Accessibility — Keyboard navigation, screen reader labels, color contrast
Day 14-15: Feedback Integration
By now, the founder has been using the preview URL for two weeks. They have feedback. We allocate two days for feedback-driven changes.
What we accept: UX flow changes, copy rewrites, layout adjustments, bug fixes.
What we push back on: New features, scope expansion, "nice-to-have" additions. These go on the post-MVP backlog.
Week 4: Launch Sprint
Day 16-17: Production Hardening
- Custom domain — DNS configuration, SSL certificate
- SEO basics — Meta tags, Open Graph images, sitemap, robots.txt
- Analytics — PostHog or similar for user behavior tracking
- Error monitoring — Sentry for crash reporting
- Database backups — Supabase handles this, but we verify the schedule
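The SEO basics mostly reduce to one metadata object. A sketch of what the root layout of a Next.js App Router project would export (the product name, domain, and copy are placeholders; Next.js renders the `<head>` tags from this export automatically):

```typescript
// app/layout.tsx (excerpt). Product name and domain are hypothetical.
export const metadata = {
  metadataBase: new URL("https://example.com"), // placeholder domain
  title: "BookQuick: simple scheduling",        // placeholder product name
  description: "Share a link, let guests book a time slot.",
  openGraph: {
    title: "BookQuick",
    description: "Share a link, let guests book a time slot.",
    images: ["/og.png"],
  },
};
```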
Day 18-19: Testing
We don't write comprehensive unit tests for MVPs. We run:
- Playwright E2E tests for the core user flow (3-5 tests)
- Manual testing on Chrome, Safari, Firefox, and mobile
- Performance check — Lighthouse audit targeting 80+ scores
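Those 3-5 E2E tests are short. A sketch of one Playwright test for the scheduling example's core flow; the route, labels, and copy are hypothetical, and the point is that a single test walks the entire happy path:

```typescript
import { test, expect } from "@playwright/test";

// One E2E test covering the core flow of the scheduling example:
// a guest opens a booking link, picks a slot, and sees confirmation.
test("guest books a time slot through a shared link", async ({ page }) => {
  await page.goto("/book/demo-host"); // hypothetical booking-link route
  await page.getByRole("button", { name: "10:00 AM" }).click();
  await page.getByLabel("Your email").fill("guest@example.com");
  await page.getByRole("button", { name: "Confirm booking" }).click();
  await expect(page.getByText("Booking confirmed")).toBeVisible();
});
```

Running this against every Vercel preview deployment catches regressions in the one flow that actually matters, without the cost of a comprehensive suite.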
Day 20: Launch
- Deploy final version to production
- Monitor error rates for 4 hours
- Hand off documentation: architecture overview, deployment guide, environment variables, database schema
- Schedule post-launch retrospective for week 5
The Tradeoffs We Make
Every four-week MVP involves deliberate compromises. Here's what we skip and why:
| We Skip | Why | When to Add It |
| --- | --- | --- |
| Comprehensive test suite | ROI is low before product-market fit | After PMF, before scaling |
| CI/CD pipeline | GitHub → Vercel auto-deploy is sufficient | When team grows past 3 |
| Microservices | Monolith is faster to ship and debug | When hitting scaling limits |
| Custom design system | Tailwind components are good enough | When brand differentiation matters |
| Internationalization | English-only until proven demand | When international users appear |
| Admin panel | Founders manage data via Supabase dashboard | When hiring non-technical ops |
What Happens After the 4 Weeks
The MVP is live. Now what?
Week 5-6: Measure. Is anyone using it? Are they completing the core flow? Where do they drop off? PostHog data answers these questions.
Week 7-8: Iterate. Based on real usage data, prioritize the next features. This is where the "Should Have" list from Week 0 gets revisited.
Month 2-3: Scale or pivot. If the core hypothesis is validated, invest in the features that drive growth. If not, pivot the product (not the code — the architecture supports iteration).
Why This Works
The four-week constraint forces three behaviors that lead to better products:
- Ruthless prioritization — When you can't build everything, you build the right thing
- Real user feedback early — A live URL in week 1 means two weeks of founder testing before launch
- Technical discipline — No time for over-engineering means simpler, more maintainable code
The MVP isn't the product. It's the experiment that tells you what the product should be.
Austin Coders
We build SaaS & AI apps that actually scale. React, Next.js, and AI-powered solutions for startups and enterprises.