In one of Horace Dediu’s old podcasts, he discusses with James Allworth, among other topics, the importance of measuring when a product has reached “good enough,” based on the current basis of competition. For a company, recognizing when you’ve reached this point is important, because incremental improvements beyond this point are over-service, exposing the product to low-end disruption.
Horace notes that Apple has not only been a serial disruptor, but has also shown evidence of being able to self-disrupt. He also says that over the last several years, he has struggled to measure this point of good enough for the iPhone — there don’t seem to be any good public data points that suggest it. He goes on to explain that Apple, however, has a good built-in mechanism for detecting whether it has reached this point with the iPhone: concurrently selling the current version alongside the previous version of the product. The idea is that if Apple sees consumers continuing to opt for the previous version while the latest version is on the market, this would serve as a signal that they’ve hit good enough.
This was interesting to me and I started thinking: Do other companies actively do this? How does this apply to software companies?
My mind then quickly jumped to Software as a Service (SaaS) companies: Does the practice of automatically delivering the latest versions of software deprive them of the valuable data of knowing when they’ve hit good enough? Is the model itself more susceptible to disruption because of this blind spot?
In Timothy Chou’s book, he explains how cloud product vendors are motivated to minimize the number of branches kept in maintenance (optimally, a vendor would have only one). By minimizing the number of branches available in the market, the vendor reduces maintenance costs, putting them in a better position to scale in a financially viable way. However, given this model, how would a vendor know that their product has hit good enough? There is no opportunity for buyers to decide between the latest version and the previous version.
Would tracking sales of premium tiers and add-ons provide good signals? Vendors would know whether incremental features were valuable enough for customers to pay extra for, but they still would not know whether the base platform has hit good enough. Additionally, the nature of competition may make it hard to structure a product that way; competitors will tend to compress premium features into the base product.
I came back to the question: Do other companies do this? In fact, I found that Microsoft does this with its Windows product.
Microsoft builds in overlap between up to three versions of Windows. Looking at the lifecycle fact sheet, we see that there were at least 18 months between the launch of Vista and the end of retail sales of XP. Do you think Microsoft was able to measure the relative success of Vista vis-à-vis XP during that timeframe? How about Millennium Edition vis-à-vis Windows 98?
Another interesting thing to think about: If product managers had access to this information — that their product has reached good enough — what do you think they would do with it? I suspect the notion of reaching good enough would create an existential conflict for product managers and product leaders, leaving them feeling that identifying that point would put their jobs at risk.
Given all this, do you think companies have the right measurements in place to do this analysis? If they did, do you think the organizational makeup would support self-disruptive behavior?
My Take: This thought process and discipline is still making its way through American businesses. However, I think that self-disruption will have a hard time sticking in businesses that focus so heavily on hitting quarterly revenue targets. This bodes well for new entrants, not so much for inflexible incumbents.