
When Data Talks, What Do You Hear?

Whac-a-Mole game

January 25, 2013.  A board meeting yesterday for a P2P marketplace venture turned to a focus on what the field-testing data is really telling us.  Creating a P2P company requires sellers, buyers, and actual transactions.  As with any consumer venture, there are plenty of unknowns in how consumers will behave when a shiny new object is put in front of them.  Trying to interpret each week’s worth of data can be a lot like Whac-a-Mole.  A/B testing becomes more like A-Z testing when all the variables are factored in.  You think you understand one side of the equation, but you tweak the other side and get very unexpected results.

Sometimes the consumer makes pretty clear-cut decisions.  I just switched from Progressive to Allstate auto insurance.  Flo had been progressively increasing my rates, to the point that, had I been leasing my vehicle, I believe the insurance would have cost about as much as that hypothetical lease payment.  When renewal time came along, I found that Mayhem was literally about half the price for the same coverage.  Every such case is unique, but the Progressive cancellation email clearly noted that I had left because of price.  The data is clear.  (Or if you are a traditionalist and choose to ignore modern style guides, the data are clear.)

Here are some things you can do to generate actionable information from the data you are collecting:

1.  Ask your consumers why they did certain things.  If your base is small enough, you can survey them and see if you can detect honest patterns.  Honesty is the key here.  Sometimes people will make up reasons that they think sound better and won’t admit their actual intentions.  Or, if you are using something like Survey Monkey, you may not have given them the right choices and may have forced them to pick something at random.  And, whatever you are selling is very important to you but may be only incidental to the daily lives of your users, so don’t expect them to dwell on the meaning of their casual decisions.

2.  Look at the external factors.  One of the student teams this semester in the Longhorn Startup Lab is focused on forecasting for small businesses.  They are trying to help SMB owners factor external events into their plans, e.g. the circus is in town, and to help them understand cyclical patterns from past months, like sales always being up on days ending in 4.  If the owners are dependent on rankings in local business listings, that is another factor that can be very critical.  And, if their businesses are reviews driven, those too must be taken into account.  Nothing around them is static: their direct competitors are doing what they can to get higher rankings and better reviews, and many of them are not above gaming the system any way possible.  With all these moving parts, this team has taken on an interesting challenge for the semester.

3.  Remember that you’re testing in a league where the rules aren’t constant.  With consumer behavior, it’s like playing football where some days the field is 100 yards long and the next it’s 125 yards.  You can’t assume a neat, orderly environment for your product launch.  Even if you do have the first-ever, unique, blows-away-the-competition product, consumers still have substitutes for it.  And, as noted in the previous point, your competitors aren’t going to sit around and let you take a clean shot at them.  So, whatever data you are gathering, say $2.12 to acquire a user, bear in mind that next week that number may be $12.12 once you’ve stimulated some reactions.  Apple seems to be experiencing the effects now of creating its own competition, as consumers are choosing the older and cheaper iPhone 4 and 4S models over the swank iPhone 5, especially in emerging markets.  The price of such incredible innovation may have exceeded what consumers can afford, and the stock market has hammered AAPL this week as a result.  Investors are interpreting this data pretty negatively.

4.  Narrow your testing field as best you can.  As I’ve written previously, consumer packaged goods companies like to test in small geographic markets where they can afford to create share of mind and can dominate shelf space.  If the data tells them their product will sell at certain price points and on the basis of certain promotional strategies in a village, chances are good the model can be scaled successfully into a city.  The key is finding a test market and environment where you can create a repeatable and reliable process, and with that you can turn the crank and start growing with some assurance and predictability.  With software products, especially apps where just exposing yourself in an app store can bring you customers scattered thinly across the planet, it’s far more challenging to figure out what’s going on if you start missing your goals.  Your data may devolve to “it’s not working” and hardly be more useful than that.

5.  Finally, look inward.  Your product may need tweaking to be successful.  You may be hearing all kinds of things in the data about why your users did this or did that, but there may be a technological fix or a UI/UX modification that can have a big impact.  You may well be able to steer your customers along a better path with subtle changes to your product.  So, never assume that you’ve nailed the design and that only your marketing requires adjustment.  They all go hand in hand.

And with that, may all your data be happy.

(Buy the pictured Whac-a-Mole game at Sears.)