First off, my apologies to actual Decision Scientists. I have no formal training and just recently learned that the area I'm fascinated by actually has a name.
There are a lot of anecdotes out there about how wonderful all the different Lean Startup methodologies are. If you go and read the Amazon book reviews, you'll see lots of comments about how they changed someone's life.
What you won't see is any data showing they actually help.
I'm a skeptic at heart. Studies on businesses are notoriously difficult to do, and while I could see value in the tools and techniques, I really wanted a deeper scientific basis for all the hype.
I finally managed to find it within the Decision Science literature. Daniel Kahneman has done a great survey of key ideas and concepts in "Thinking, Fast and Slow". It's a long book, but for the purposes of this discussion we really care about part 3: Overconfidence. "The Signal and the Noise" by Nate Silver also talks a fair amount about our inability to do forecasts well. Lastly, "Naked Statistics" provides another view.
The Lean Startup Methodologies (LSM) are designed to help you answer two questions:
1) Should I even bother building a full product?
2) If I do build the product, should I continue with incremental changes, or pitch the whole thing and start over?
As an entrepreneur with a product or business idea, you are going to be subject to three different cognitive illusions. These are common to all of humanity, and there is little you can do to train them away. The three I see as most problematic are Optimism Bias, Domain Expert Overconfidence, and Confidence of Prediction in an unstable environment.
Optimism Bias
Humans in general tend to think they perform better than average. In particular, entrepreneurs tend to be even more optimistic than the general population about their ability to beat the odds. For instance, the base rate of failure for a new business is 65%, yet the vast bulk of entrepreneurs put their own chance of failure at 40%. This delusion can help us make it through the day, but it can also cause you to stick with something for far too long. (Kahneman, Daniel (2011-10-25). Thinking, Fast and Slow (p. 256). Farrar, Straus and Giroux. Kindle Edition.)
Domain Expert Overconfidence
Everyone is overconfident about their ability to predict the future. It doesn't matter how much of an expert you are in your field; you are overconfident. Time and time again, research shows that a crappy mathematical model informed by expert opinion is almost always better than either the model or the expert alone. (http://www.rff.org/Events/Documents/Lin_Bier.pdf, all of "The Signal and the Noise")
The second part here is apropos of a problem I'm wrangling with at work. I'm working on a product that is all about market creation. By definition, my ability to research and learn about my market is terribly limited, since the market doesn't exist yet. Learning the Lean Startup tools improves your metacognitive ability to see your own weaknesses in expertise and adapt to them. (http://gagne.homedns.org/~tgagne/contrib/unskilled.html)
Confidence of Prediction
When you ask yourself "Do I need this feature in my product?", the question you are really asking is "Will adding this feature to my product add enough value to my business in the future to justify the (opportunity) cost now?". That is an attempt to forecast the future, something humans can be good or bad at depending on how quickly they get feedback about their decisions. Firefighters and nurses, who get almost instantaneous feedback on the quality of their forecasts, are able to build a great amount of skill in this area. Those of us operating on longer time scales fall prey to building confidence in our predictions without actually improving our accuracy. Think about how much that sucks for a moment. You can be in a field for years, making forecasts, assuming you are getting better, when in reality you aren't. (Kahneman, Daniel (2011-10-25). Thinking, Fast and Slow (p. 240). Farrar, Straus and Giroux. Kindle Edition.)
Essentially it all boils down to this: you are going to feel really confident about your idea and plans, but that confidence is not based on actual hard data. It is instead an illusion fed by how we perceive and think about the world. You can't use a gut 'confidence' check to know whether you really have a product at the get-go; we're just not wired that way.
But by acknowledging our limitations, we can figure out ways to work around them.
So how do you deal with this trifecta of Overconfidence in your idea?
To quote Steve Blank: "GET OUT OF THE BUILDING" and go talk to customers.
That's it.
All the different LSMs out there have different suggestions for defining a market segment, finding customers, and talking to them. But it boils down to talking directly to customers to counteract your overconfidence. They all recommend a progressive approach to your research. You start with wide-open investigational interviews, and as you learn more (and validate or invalidate your overconfident beliefs) you move to more structured techniques that gather more accurate, but less wide-ranging, data: surveys, prototypes, and the like. You can even run A/B value-prop testing with a mock web site and Google AdWords.
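To make that last step concrete, here's a minimal sketch (with invented traffic numbers and thresholds of my own choosing, not anything the LSM books prescribe) of checking whether one landing-page value prop really converts better than another, using a simple two-proportion z-test:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results from sending AdWords traffic to two mock landing pages.
visitors_a, signups_a = 480, 31   # value prop A
visitors_b, signups_b = 505, 52   # value prop B

p_a = signups_a / visitors_a
p_b = signups_b / visitors_b

# Pooled two-proportion z-test: is B's conversion rate really higher than A's?
p_pool = (signups_a + signups_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 1 - NormalDist().cdf(z)  # one-sided: "B beats A"

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p_value:.3f}")
# A small p-value suggests the difference isn't just noise; a large one
# means keep collecting data (or accept that the value props are a wash).
```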
That said, you need to be cautious about over-generalizing your results. We humans love to see patterns and tend to over-value the things we see and measure, even when they deserve little confidence.
(Patrick Leach (2006-09-15). Why Can't You Just Give Me The Number? (Kindle Locations 1911). Probabilistic. Kindle Edition. )
Launching - Answering the "Persevere or Pivot" question
These pre-launch experiments span a whole range of cost, accuracy, and specificity. On the low end you have informal, unstructured interviews. These are great for proving things out early on, and for letting you find a better business idea than the one you started with. On the high end of cost and complexity, you have large-scale polling done right (i.e., a random sample of customers, question wording that avoids bias, etc.). That won't find you a better business idea, but it can provide a very accurate and specific answer to the question you pose.
At some point, though (and this will be very specific to you and your circumstances), you'll need to sit back and decide to stop running pre-launch experiments. When do you stop? When failing in market would be better than continuing to run experiments. For example, if you have a large existing business (let's say $100M in annual revenue) that you are thinking of disrupting with a new revenue model, you probably want the more formal methods, since spending $50,000 on a polling firm and taking the time is small relative to the risk. But a 6-person startup with no actual revenue yet? It depends on the size of your market. If you only have 100 potential customers, you may want to do more upfront work, because running experiments on customers can cause them to get grumpy and leave. But if you have a larger target market, it's fine to lose some customers while you work things out. If you want a more decision-science-based approach, see Chapters 11 and 12 of Patrick Leach (2006-09-15), Why Can't You Just Give Me The Number?. The different market sizes account for the different approaches in the LSM books.
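One crude way to frame that "stop experimenting" decision is as an expected-value comparison: what does another round of research cost, and how much downside does it actually remove? The numbers below are entirely made up for illustration; Leach's book covers far more rigorous versions of this.

```python
# Back-of-the-envelope: is a $50,000 poll worth it for a $100M business?
# All figures are hypothetical.

revenue_at_risk   = 10_000_000   # revenue the new model could damage if it flops
p_flop_unpolled   = 0.40         # our (overconfident?) guess at failure odds today
p_flop_polled     = 0.25         # failure odds if the poll steers us away from bad variants
poll_cost         = 50_000

expected_loss_without_poll = p_flop_unpolled * revenue_at_risk
expected_loss_with_poll    = p_flop_polled * revenue_at_risk + poll_cost

print(f"Expected loss, no poll:   ${expected_loss_without_poll:,.0f}")
print(f"Expected loss, with poll: ${expected_loss_with_poll:,.0f}")
# If the second number is smaller, the poll pays for itself many times over.
# Run the same arithmetic for a 6-person startup and the poll rarely wins.
```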
Once you get a product in market, you are still subject to the same overconfidence illusions around forecasting. This is where the second part of the LSM stuff kicks in: Analytics and Rapid iteration.
I'm totally thrilled to watch market after market get disrupted by rapid prototyping. On the hardware side, FPGAs came along in the 90's and allowed really interesting products to be built without the capital outlay needed for an ASIC. On the SaaS side, the AWS/DevOps/hardware-as-software movement has added nimbleness to the field. Outside of computing, the revolution around rapid prototyping, 3D printing, and cheap CNC tools (like CNC plywood routers) has drastically changed things. Even the repatriation of hard-goods manufacturing is occurring because it allows businesses to iterate faster (http://www.nytimes.com/2011/10/13/business/smallbusiness/bringing-manufacturing-back-to-the-united-states.html?pagewanted=all).
How can overconfidence get you after launch? Go read the opening chapters of "The Startup Owner's Manual" to learn how Webvan's overconfidence caused it to ignore the metrics it was getting and fail big.
The steps at this point are:
1) Ship an iteration of the business (this includes ad copy, market segment, marketing website and materials, and the actual product)
2) Observe behavior using quantitative metrics
3) Use that to drive qualitative discussions with customers
4) Make a hypothesis and modify product/web site/ad copy
5) Repeat
It's easy to get analytics wrong. Eric Ries labeled the wrong kind 'Vanity Metrics': metrics that are pretty much guaranteed to give you the answer you want (generally up and to the right). But much like qualitative interviews, there is a broad spectrum of accuracy and implementation complexity. For that first launch you don't need much. A retention chart keyed off the activity that drives your engine of growth is enough to shake your confidence. You are looking for analytics that help you detect the huge problems in your overconfident assumptions. You aren't yet at the point where you care about a 3% improvement in a number or running a linear regression on your data.
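To make "a retention chart keyed off the activity that drives your engine of growth" concrete, here's a minimal sketch that buckets users into weekly signup cohorts and counts how many performed the key activity in each later week. The event-log format and field names are my own assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical event log: (user_id, signup_week, activity_week),
# where "activity" is whatever drives your engine of growth (e.g. an invite sent).
events = [
    ("u1", 0, 0), ("u1", 0, 1), ("u2", 0, 0),
    ("u3", 1, 1), ("u3", 1, 3), ("u4", 1, 1), ("u4", 1, 2),
]

cohort_users = defaultdict(set)    # signup_week -> users in that cohort
active = defaultdict(set)          # (signup_week, weeks_since_signup) -> active users

for user, signup_week, activity_week in events:
    cohort_users[signup_week].add(user)
    active[(signup_week, activity_week - signup_week)].add(user)

for week in sorted(cohort_users):
    size = len(cohort_users[week])
    row = [len(active[(week, offset)]) / size for offset in range(4)]
    print(f"cohort {week}: " + "  ".join(f"{r:.0%}" for r in row))
# If each new cohort's curve flattens out at a higher level than the last,
# keep going; if every cohort decays toward zero, that's your pivot signal.
```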
Don't know what metrics to track? Grab a copy of Lean Analytics (http://leananalyticsbook.com/). They break down a large number of different business models and what you should be looking at to decide whether to throw in the towel.
How quick should your iterations be? As quick as possible without pissing off your customers or partners. For instance, if you are growing rapidly, you should iterate quickly (daily, even?). The people irritated by all the change will be an ever-shrinking proportion of your total customer base, since you are acquiring new customers at a very fast rate. I personally (overconfidently, and without testing it, of course) think you need to be willing to lose your early customers, and therefore shouldn't worry about them.
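To see why rapid growth makes the irritated-early-adopter problem shrink on its own, here's a toy calculation (all numbers assumed) of what fraction of the customer base those early, change-averse users represent over time.

```python
# Toy model: 1,000 early customers get annoyed by constant change,
# while the product grows 15% week over week. What share of the base are they?
early_annoyed = 1_000
base = 10_000
weekly_growth = 0.15

for week in range(0, 13, 4):
    total = base * (1 + weekly_growth) ** week
    print(f"week {week:2d}: annoyed users are {early_annoyed / total:.1%} of the base")
# Even if none of them churn, they quickly become a small and shrinking slice.
```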
One last moment of reflection. These are all really cool tools. But if you've already decided on a course of action, the value of any new information may be zero, since it will not change your mind. In that case, just make your decision and move on. I like these tools (in particular stochastic modeling), but in all honesty, if you crack open Bayesian theory and run the numbers, they only increase your odds of a good outcome by a small amount. This is due to the huge amount of raw luck and chance that exists in the world, most of which is outside of our control (I feel for everyone who launched a new business right before the Great Recession).
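If you want to see what "run the numbers" looks like, here's a hedged Bayesian sketch with invented figures: start from a rough base rate of new-business success, treat a positive batch of customer-development signals as an imperfect test, and see how far the posterior actually moves.

```python
# Hypothetical Bayesian update: how much does a "positive" round of
# customer-development signal really move your odds of success?
prior_success = 0.35     # rough base rate for a new business surviving
p_signal_if_good = 0.70  # chance your experiments look positive when the idea is good
p_signal_if_bad = 0.40   # chance they look positive anyway when the idea is bad

p_signal = (p_signal_if_good * prior_success
            + p_signal_if_bad * (1 - prior_success))
posterior_success = p_signal_if_good * prior_success / p_signal

print(f"prior:     {prior_success:.0%}")
print(f"posterior: {posterior_success:.0%}")
# With noisy tests the jump is real but modest -- the rest is still luck.
```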
So have fun and enjoy yourself!
Books:
"The Lean Startup"
http://theleanstartup.com/
"Lean Analytics"
http://leananalyticsbook.com/
"Thinking, Fast and Slow"
http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555
"Naked Statistics"
http://www.amazon.com/Naked-Statistics-Stripping-Dread-Data/dp/0393071952
"The Signal and the Noise"
http://www.amazon.com/The-Signal-Noise-Many-Predictions/dp/159420411X
"Why Can't You Just Give Me The Number? …Guide to using Probabilistic Thinking to Manage Risk and to Make Better Decisions"
http://www.amazon.com/Guide-Probabilistic-Thinking-Decisions-ebook/dp/B0029F2STA
"The Startup Owners Manual"
http://www.amazon.com/The-Startup-Owners-Manual-Step-By-Step/dp/0984999302