Friday, September 28, 2012

Is the iPhone good enough?

We don’t want to just make a new phone. We want to make a much better phone.
- Jony Ive, video at iPhone 5 launch event
Disruption theory has taught us that the greatest danger facing a company is making a product better than it needs to be. There are numerous incentives for making products better but few incentives for redirecting improvements away from the prevailing basis of competition.
This danger is more acute for technology companies. Couple those incentives with the speed of improvement in the underlying technologies (a.k.a. Moore's law) and over-service can arrive suddenly, more quickly than any warning from the marketplace. A product can tip from under- to over-shooting the market within one product cycle: one year the product is under-performing and trying to catch up to the competition, and the next it's superfluous and commoditized. The dilemma is compounded by the cycle time of development, which can span multiple product cycles.
Therefore, how to tell whether a product is over-serving a market is one of the most important, and most frequent, questions I am asked. It's easy to see over-service in the rear-view mirror when looking at a multi-year pattern. The trouble is that by the time you see the data, it's too late. How do you tell that you're on the cusp of good enough, and subject to imminent disruption, before you get there?
I consider measuring a product's absorbability to be a marketing problem. The marketer's job is to read the signals from the market[1]. Determining absorbability comes down to reading two market signals, both of which must be present before green-lighting an improvement: (a) the product's improvements must be used and (b) the product's improvements must be valued.
If a product's improvements are not used, or the buyer will not pay more for them, then they are not being absorbed and the effort to develop them should be redirected.
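As a rough sketch of that gate (in Python, with hypothetical inputs and an arbitrary usage threshold of my own choosing), the decision reduces to requiring both signals at once:

```python
def improvement_absorbed(share_using_feature, buyers_pay_premium, usage_threshold=0.5):
    """Both signals must be present: the improvement is used AND valued.

    share_using_feature: fraction of buyers observed using the new
        feature (a hypothetical usage-analytics figure).
    buyers_pay_premium: True if buyers keep paying the same or more for
        the improved product (the pricing signal discussed below).
    usage_threshold: an illustrative cutoff for calling a feature "used".
    """
    used = share_using_feature >= usage_threshold
    valued = buyers_pay_premium
    return used and valued

# If either signal is missing, the development effort should be redirected.
print(improvement_absorbed(share_using_feature=0.7, buyers_pay_premium=True))  # True
```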
Now the problem becomes one of measurement. Of the two, utilization is the easier to measure. Data can be gathered on whether a feature is being used, and research methods exist to tell whether a feature would be used even if it's not yet available[2].
The more difficult assessment is the value of a feature. You can usually only tell value by trying to price it and watching what happens. For example, you add more speed/memory/capacity and try charging more (or the same) for the product. Acceptance is measured by sales growth, which gives you an indication of whether the improvements are valuable.
If you have to add features and drop prices at the same time, then it's likely that the market does not value the improvement.
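A minimal sketch of how that readout could be scored, using made-up figures rather than any actual sales data:

```python
def improvement_valued(units_before, units_after, price_before, price_after):
    """Score the pricing experiment with hypothetical per-cycle figures.

    units_*: unit sales in the product cycle before and after the
             improvement shipped; price_*: average selling price.
    """
    sales_grew = units_after > units_before
    price_held = price_after >= price_before
    # Growth at the same (or a higher) price reads as the market valuing
    # the improvement; having to cut price to move the added features does not.
    return sales_grew and price_held

# Illustrative figures only, not actual iPhone data.
print(improvement_valued(units_before=100, units_after=130,
                         price_before=600, price_after=600))   # True
```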
But this is extremely risky. You need to wait through a sales cycle and iterate through a development cycle before you have an answer. In a space where competitors are placing opposite bets, the experiment fails even if you get the data: by the time the answer arrives, the market may already have moved on.
How can you structure a value measurement experiment without wasting an opportunity?
Rather than dealing with hypotheticals, let’s use the iPhone as a test case. As Jony Ive states, the focus for the latest iPhone was to make it better. Is this improvement absorbable? What happens if Apple’s bet on being better is wrong?
First, we can confirm that the iPhone has been on a trajectory of getting better and that those improvements have been absorbed so far. We can measure the history of performance of the product (roughly doubling every year) and we can also measure proxies for performance as I have in the following charts:


As the product has been improved along these dimensions, sales have increased and prices have held steady (even rising occasionally).
The question is about the future: what about the latest "5" variant?
The key to this experiment is the presence of a control group. We could test the question of absorbability by keeping a version of the product which did not improve (or which got cheaper) and measuring how it performs relative to the "improved" version.
Of course, this is exactly what Apple does with the n-1 generation products. By keeping older products in the range at lower price points it can measure whether the improvements are valued.
If sales of the n-1 variant were to increase relative to the new version, then Apple would know it had reached the point of good enough. The experiment is brilliant because the margin on the older products is maintained even at the lower price point.
We don't have public data on the performance of the old vs. the new, but some studies show that, at least in some markets, the older variants have so far been a minor part of the sales mix. The CIRP study from early this year showed that about 90% of holiday iPhone sales in the US were for the latest (4S) variant. If this pattern persists globally and for the 5, then the improvements can be said to be valued.
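Here is a sketch of how that control-group readout could be tracked quarter by quarter; apart from the ~90% figure cited above, the numbers are invented purely for illustration:

```python
# Share of the newest iPhone generation in the quarterly sales mix.
# Only the ~0.90 holiday figure comes from the CIRP study cited above;
# the other values are made up to illustrate the readout.
newest_variant_share = [0.90, 0.88, 0.87]   # launch quarter, then the two that follow

def good_enough_signal(shares, drop_threshold=0.10):
    """Flag when buyers shift toward the cheaper n-1 "control" product.

    A sustained fall in the newest variant's share of the mix (i.e. a
    rising share for the older, cheaper product) is the signal that the
    improvements are no longer being valued.
    """
    return (shares[0] - shares[-1]) >= drop_threshold

print(good_enough_signal(newest_variant_share))  # False: the new variant still dominates the mix
```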
If the new features (as represented by the metrics charted above) also get broad engagement (data which Apple can easily obtain), then the iPhone 5 can be declared not yet good enough. The company can then comfortably work on improving it further.

  1. Note that the marketer’s job is to listen not to talk.
  2. e.g. contextual inquiry.



