What exactly is an MVP? The answer is: it depends. It depends on the stage of your business, and where you are with your product. If you're just starting out, an MVP might be little more than a pitch deck. If you've got early adopter customers, then your MVP might be a working prototype.
The point is that an MVP (Minimum Viable Product) is designed to answer a very specific question at each stage of your product development. What is that question? We'll get to that.
It's a process, not a product
In The Lean Startup, Eric Ries describes the "build-measure-learn" feedback loop.
The process is jump-started by an entrepreneur's vision: yours. You see the world as it should be, not necessarily as it is. The world has problems. You see your product as the solution, at least to some of these.
One of the reasons most startups fail is that they start too big. Not that the dream is too big, or the idea is too big, but the MVP is too big. They spend too much money and time building something before validating that the product solves a real problem, and that people are willing to pay for it.
Building things accrues risk: the bigger the MVP, and the more time and money spent on it, the more risk is involved. As entrepreneurs, part of our job is to mitigate risk, and since experimentation is inherently risky, we keep each experiment as small as possible.
At the earliest stage of a product we're not necessarily trying to build exactly the right product. Rather, we're trying to learn if the market actually has the particular problem that we're proposing to solve.
What is the simplest, smallest thing that you can build? Something that you can measure, learn from, and then apply to the next iteration of the cycle. Could it be a survey? A pitch deck? A blog or ebook?
What can you get into the hands of your potential customers right now that you could measure? What did your survey responses tell you? How many people visit your blog on a monthly basis? How many people downloaded your ebook? What was the feedback from your pitch deck?
Now, what can you learn? More importantly, how can you apply your learning to the next phase of building?
In reality this process should never end. Long after the corks have been popped and version 1.0 has been released, a healthy product team will still be applying MVP principles to ongoing product development.
Stair-step your way to launch
Most successful products on the market today did not begin life as a complete, usable product. Many did not even resemble the product they are today. Facebook, for example, started out as a "hot or not"-style site for Harvard students.
At each step of the way, collect what you've learned and apply it to the next stage of development.
Here's an example of a stair-stepped launch:
- Initial idea is pitched verbally to some people who may be interested. Their feedback is taken and turned into…
- A slide deck that can be used to demonstrate the rough idea. This deck is pitched at a local startup meetup, and the feedback is used to create…
- A simple landing page to test out the messaging and collect email addresses for early adopters. This landing page and the messages that are developed become the basis for…
- A blog where you talk about the idea and begin to set yourself up as an expert in the field.
- Wireframes that show visually how the app will be put together.
- A trimmed-down private beta version that demonstrates the basic necessary version of the app.
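Each of these steps produces something you can measure. As a minimal sketch of what that measurement might look like at the landing-page stage, here is how you could compute a signup conversion rate. The numbers are invented for illustration; in practice they would come from your analytics tool and email list.

```python
def conversion_rate(visitors, signups):
    """Fraction of visitors who left an email address."""
    return signups / visitors if visitors else 0.0

monthly_visitors = 1200   # assumed figure, for illustration only
email_signups = 90        # assumed figure, for illustration only

rate = conversion_rate(monthly_visitors, email_signups)
print(f"Landing-page conversion: {rate:.1%}")  # prints "Landing-page conversion: 7.5%"
```

Even a crude number like this gives the next step something concrete to improve on: if a new headline or pitch raises the rate, you have evidence worth carrying into the beta.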
Make educated guesses
The process of creating an MVP is a quasi-scientific process: we look at the world and make an educated guess—a hypothesis—about how we can make it better. Then we test that hypothesis. We create a minimum viable product: a small product designed to see if we're right or wrong. If we're right then we double down; we keep building that product. If we're wrong then we pivot: we try a different approach. We do that again and again until we have discovered something that really works: a product that solves a real problem and that people are willing to pay real money for.
Apply learning and do it again. And again. And again.
At each step of the validation process we collect data. We're looking for data that will prove or disprove our hypothesis.
Data will help us answer these questions:
- Will users use this product?
- Does it solve a real problem?
- Will users pay money to solve this problem?
At a very early stage, the data we collect may be anecdotal rather than empirical. At the very beginning we may only have our own experience to draw on: what does it feel like to use the app? Where do we get hung up? What kinds of usability issues do we run into? As we progress we add more anecdotal data: what do other users do? What happens the first time someone has to use the app? And as the app matures we begin adding empirical data: how many users successfully complete tasks, how many fail, how many pay for the app, how many churn, and so on.
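As a rough sketch of how raw usage data turns into those empirical metrics, the snippet below tallies a handful of hypothetical events. The event names and users are made up; a real product would pull these from its analytics pipeline.

```python
from collections import Counter

# Hypothetical (user, event) records, standing in for real analytics data.
events = [
    ("alice", "task_completed"), ("bob", "task_failed"),
    ("carol", "task_completed"), ("carol", "paid"),
    ("dave", "task_completed"), ("dave", "churned"),
]

counts = Counter(kind for _, kind in events)
attempts = counts["task_completed"] + counts["task_failed"]

print(f"Completion rate: {counts['task_completed'] / attempts:.0%}")  # 75%
print(f"Paying users: {counts['paid']}")
print(f"Churned users: {counts['churned']}")
```

The point is not the arithmetic but the habit: once events are recorded, each question from the list above maps to a simple count or ratio you can track release over release.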
Each of these data points helps us refine our approach as we build your app. If we were just building your app to spec, we wouldn't worry about these things. But our goal isn't to build your app to spec; our goal is to make your users successful at using your app. To that end we'll collect all the data we can, and guide you in collecting your own data that we can feed back into the development process.