What Happens After Your MVP Launches (Nobody Talks About This)
You launched. Congratulations. The site's live, the first users are signing up, and you feel like you've climbed a mountain.
Bad news: you're at base camp. The actual climb starts now.
I've launched MVPs for marketplace founders, SaaS startups, and everything in between. The launch itself is a milestone, but it's what comes after that determines whether your product survives. And nobody talks about this part because it's not sexy. It's not a Twitter thread about shipping in 4 weeks. It's the messy, unglamorous reality of running software that real people use.
Week One: The Bug Reports
Your first real users will find bugs. Not the bugs you were worried about — those you tested. The weird edge cases you never considered.
"I signed up with a plus sign in my email and now I can't reset my password." "I uploaded a HEIC photo from my iPhone and it shows as a broken image." "I clicked the back button during checkout and got charged twice."
These aren't hypothetical. These are actual bugs I've dealt with in the first week after launching projects. Real users do things you'd never think to test because they're not following your happy path. They're using your product on a phone with patchy 3G, on a browser you've never tested, with accessibility settings you didn't account for.
This is why I insist on tests during development. Not because I enjoy writing them — because they're the safety net that catches the obvious stuff so you can focus on the genuinely surprising bugs that real usage surfaces.
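To make that concrete: when a bug like the plus-sign email turns up, it gets a regression test so it can never come back. A tiny sketch in Python (the stack here is Laravel, so treat this as illustrative; `normalize_email` is a made-up helper, not from any real project):

```python
# Hypothetical regression test for the plus-sign email bug described above.
# The key property: normalisation must lowercase and trim, but must NOT
# strip the +tag part, because "alice+shop@example.com" is a valid,
# distinct address.

def normalize_email(email: str) -> str:
    """Canonicalise an email for lookup without destroying valid characters."""
    return email.strip().lower()

def test_plus_sign_email_survives_normalisation():
    assert normalize_email(" Alice+shop@Example.com ") == "alice+shop@example.com"
    # A naive implementation that strips '+' would break password-reset lookups.
    assert "+" in normalize_email("alice+shop@example.com")

test_plus_sign_email_survives_normalisation()
```

Five minutes of work, and that class of bug is fixed forever instead of fixed until the next refactor.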
The practical response: set up proper error tracking before you launch. Something like Sentry or Flare that captures exceptions in real time with full context — what the user was doing, what browser they're on, what data was involved. Without this, you're debugging blind.
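The SDKs for tools like Sentry wire this up automatically, but the idea is worth seeing in miniature. A plain-Python sketch of what an error report with context looks like (the `capture_exception` helper is mine, not Sentry's API):

```python
import traceback

def capture_exception(exc: Exception, context: dict) -> dict:
    """Build an error report with the context a tool like Sentry records:
    what the user was doing, their browser, and the data involved."""
    return {
        "type": type(exc).__name__,
        "message": str(exc),
        "stacktrace": traceback.format_exception(type(exc), exc, exc.__traceback__),
        "context": context,  # e.g. user id, browser, request payload
    }

# Usage: wrap the risky work, then ship the report to your tracker.
try:
    raise ValueError("HEIC upload failed to decode")
except ValueError as e:
    report = capture_exception(e, {"user_id": 42, "browser": "Safari iOS"})
```

The point is the `context` dict: a stack trace on its own tells you *where* it broke; the context tells you *why* that user hit it when your tests didn't.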
Week Two: The "Oh Shit, We Need Monitoring" Moment
At some point in the first two weeks, something will go wrong and you won't know about it until a user tells you. The database runs a slow query and the page takes 12 seconds to load. The email queue backs up and confirmation emails arrive 4 hours late. The server runs out of disk space because nobody set up log rotation.
This is the moment every founder discovers that deploying software and operating software are two completely different skills.
Here's the minimum monitoring you need from day one:
- Error tracking (Sentry, Flare, or similar) — know when things break
- Uptime monitoring (Oh Dear, UptimeRobot) — know when the site's down before your users tell you
- Application monitoring (Laravel Telescope in development, Horizon for queues) — know what's slow
- Log aggregation — be able to search logs when debugging production issues
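The uptime piece sounds fancier than it is: a check runs on a schedule and classifies the result. A toy sketch of that decision (the thresholds are mine, not any tool's defaults; Oh Dear and friends make all of this configurable):

```python
def classify_response(status_code: int, elapsed_seconds: float,
                      slow_threshold: float = 2.0) -> str:
    """Classify a health-check result the way an uptime monitor would."""
    if status_code >= 500:
        return "down"    # server error: alert immediately
    if status_code >= 400:
        return "broken"  # client-visible error worth investigating
    if elapsed_seconds > slow_threshold:
        return "slow"    # up, but users are waiting
    return "ok"

# In production you'd poll a health endpoint every minute and alert on
# anything that isn't "ok"; here we just classify some sample results.
print(classify_response(200, 0.3))   # healthy
print(classify_response(200, 12.0))  # that 12-second page load
print(classify_response(503, 0.1))   # find out before your users do
```

That "slow" bucket matters as much as "down" — the 12-second page load from earlier returns a perfectly healthy 200.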
I set all of this up before launch on every project. It's not exciting work and clients sometimes question why I'm spending time on "infrastructure" instead of features. Then the first production issue hits and they're very glad we can see exactly what happened.
Month One: User Feedback That Contradicts Everything
This is the big one. The thing that separates founders who succeed from those who don't.
Your users will tell you things you don't want to hear. The feature you spent three weeks building? Nobody uses it. The flow you thought was intuitive? Users are confused by step two. The pricing model you agonised over? People think it's too complicated.
This is not failure. This is the entire point of launching an MVP. You launched to learn, and now you're learning. The question is whether you can hear the feedback without getting defensive.
I've watched founders react to user feedback in two ways:
The bad way: "Users just don't understand it. We need a tutorial. We need an onboarding flow. We need tooltips explaining everything." This is the founder protecting their vision instead of adapting to reality.
The good way: "Interesting — three people said the same thing. Let's watch someone actually use it and see where they get stuck." This is the founder using the MVP as a research tool, which is what it's supposed to be.
The practical response: talk to your users. Not through surveys — actual conversations. Watch them use your product over a screen share. The gap between what you think they'll do and what they actually do is where the insights live.
Month Two: The Feature Request Avalanche
Once people are using your product, they'll want more. A lot more. Every user has their own version of what your product should be, and they'll tell you about it. Enthusiastically.
"Can you add a calendar view?" "We really need an export to CSV feature." "What about a mobile app?" "Could you integrate with Xero?"
Each request is reasonable on its own. Together, they represent about three years of development work. The hardest part of running a product isn't building features — it's deciding which features not to build.
Here's my framework for evaluating feature requests:
- How many people have asked for this? One person is an anecdote. Five people asking for the same thing is a signal.
- Does this serve the core use case? A cleaning marketplace needs better search before it needs a blog. Stay focused on the core loop.
- What's the maintenance cost? Every feature you build has to be maintained forever. That CSV export needs to work correctly every time you change the data model. The calendar view needs to stay in sync. Features are not one-off costs.
- Can we validate it cheaply? Before building a full calendar view, can you export an .ics file and see if people actually use it? The cheapest way to validate demand is often a manual process, not code.
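That .ics trick is genuinely cheap. An iCalendar file is just text, so you can render one per booking and put an "Add to calendar" link in the confirmation email. A minimal sketch (function name and field values are illustrative; the format itself is RFC 5545):

```python
from datetime import datetime, timedelta

def booking_to_ics(summary: str, start: datetime, duration_hours: int) -> str:
    """Render a single booking as a minimal iCalendar (.ics) file —
    enough for an 'Add to calendar' link, no calendar view required."""
    fmt = "%Y%m%dT%H%M%S"
    end = start + timedelta(hours=duration_hours)
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//mvp//EN",
        "BEGIN:VEVENT",
        f"UID:{start.strftime(fmt)}-booking@example.com",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

ics = booking_to_ics("Cleaning: 42 High St", datetime(2025, 3, 14, 9, 0), 2)
```

If nobody clicks the link, you've just saved yourself weeks of calendar UI. If everyone does, now you have evidence worth building on.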
Month Three: The "Should We Pivot" Conversation
Somewhere around month three, if things aren't growing as fast as you hoped (and they almost never are), you'll start questioning everything. Is this the right market? Should we change the pricing? Maybe we should target a different customer segment?
This is normal. It's also where most MVPs die — not from technical failure but from founders losing confidence.
My advice: look at the data, not your feelings. Are people signing up? Are they coming back? Are they telling other people? If users are engaged but growth is slow, you have a distribution problem, not a product problem. If people sign up and immediately leave, you might have a product problem.
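"Are they coming back?" is answerable with a few lines over your events data. A sketch of week-one retention — the fraction of users who return 7-13 days after signing up (the data shapes here are assumptions; in practice this is a query over your users and activity tables):

```python
from datetime import date

def week_one_retention(signups: dict, activity: dict) -> float:
    """Fraction of users active again 7-13 days after signing up.
    `signups` maps user_id -> signup date; `activity` maps
    user_id -> dates the user did something meaningful."""
    if not signups:
        return 0.0
    retained = 0
    for user, signed_up in signups.items():
        for day in activity.get(user, []):
            if 7 <= (day - signed_up).days <= 13:
                retained += 1
                break
    return retained / len(signups)

signups = {"a": date(2025, 1, 1), "b": date(2025, 1, 1), "c": date(2025, 1, 2)}
activity = {"a": [date(2025, 1, 9)], "b": [date(2025, 1, 2)]}  # b left after day 1
rate = week_one_retention(signups, activity)  # 1 of 3 users came back
```

A number like this won't tell you what to build next, but it will tell you whether "growth is slow" means a distribution problem or a retention problem — which is exactly the distinction the pivot conversation hinges on.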
The worst thing you can do at this stage is panic-pivot. The second worst is stubbornly refuse to adapt. The middle ground — iterating on what's working, cutting what isn't, talking to users constantly — is where most successful products find their groove.
The Ongoing Reality
Here's what nobody tells you about running software:
It never stops needing attention. Dependencies need updating. Security patches need applying. Browsers release updates that break your CSS. Third-party APIs change their schemas. Running software is a living thing that needs constant care.
Technical debt accumulates. Those shortcuts you took to launch? They'll come back. Not as dramatic failures, but as friction — things taking longer to build than they should, bugs in unexpected places, developers spending time understanding workarounds instead of writing features.
The costs don't stop. Hosting, monitoring, email delivery, payment processing fees, domain renewals, SSL certificates, CDN, database hosting. Budget for £200-500/month minimum in operational costs, and that's for a small application.
You need a developer on call. Not full-time necessarily, but someone who can respond when things break. And things will break. At 2am on a Saturday, ideally. The production gods have a sense of humour.
How to Prepare
- Budget for 3-6 months of post-launch development. The MVP isn't done when it launches. Budget at least half your original development cost for post-launch iteration.
- Set up monitoring before you launch. Not after the first outage. Before.
- Have a process for collecting feedback. Even if it's just a shared spreadsheet. Capture what users say, when they say it, and how many have said the same thing.
- Accept that the MVP will change significantly. What you launch and what you have six months later will look very different. That's success, not failure.
- Keep your developer close. The worst time to find a new developer is when production is on fire. Maintain the relationship with whoever built it.
The launch is a beginning. The real work — the work that determines whether your product succeeds — happens in the months that follow. Plan for it.
If any of this resonates, I work with founders directly — from early MVP planning through to scaling. No fluff, just practical engineering.