
Trouble with Google Experiments

I’ve been using Google Experiments (GE) pretty heavily in the last year and the method it uses to send traffic to landing page alternatives has always confused me a little. So yesterday I did some searching and it turns out that GE uses a “multi-armed bandit” method of splitting up traffic between alternatives. Basically this means that each experiment starts with a short evaluation period when traffic really is split 50/50 (or whatever percentage you choose). After the evaluation period the conversion rates of both alternatives are measured and more traffic is sent to the higher converting page. This evaluation is carried out a few times a day and the traffic split is adjusted accordingly (I’ve sketched a rough simulation of this after the list below). The reasoning behind this is supposedly twofold:

1. It minimizes the effect on overall conversion for the period of the experiment if one of your alternatives is particularly horrible.

2. It can lead to a much faster 95% confidence result, especially when one alternative performs much better than the other.
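To make the mechanism concrete, here is the simulation sketch promised above. Google hasn’t published the exact reallocation rule, so re-weighting traffic in proportion to each page’s observed conversion rate after the evaluation period is an assumption for illustration only, as are all the numbers.

```python
import random

# Minimal sketch of bandit-style traffic reallocation. Assumed rule:
# after an even evaluation period, weight traffic in proportion to
# each page's observed conversion rate. Google's actual algorithm
# is not public.

TRUE_RATES = [0.15, 0.15]   # both pages actually convert identically
DAILY_VISITORS = 100
EVAL_DAYS = 2               # even split during the evaluation period
DAYS = 30

visits = [0, 0]
conversions = [0, 0]
weights = [0.5, 0.5]

for day in range(DAYS):
    for _ in range(DAILY_VISITORS):
        # Route each visitor according to the current traffic split.
        page = 0 if random.random() < weights[0] else 1
        visits[page] += 1
        if random.random() < TRUE_RATES[page]:
            conversions[page] += 1
    if day >= EVAL_DAYS:
        # Re-weight by observed conversion rate (the assumed rule).
        rates = [conversions[i] / max(visits[i], 1) for i in (0, 1)]
        if sum(rates) > 0:
            weights = [r / sum(rates) for r in rates]
    print(f"day {day + 1:2d}: split {weights[0]:.0%}/{weights[1]:.0%} "
          f"visits={visits} conversions={conversions}")
```

Run it a few times: even with identical true rates, an early lucky streak for one page routinely pushes the split well past 70/30, and with so little traffic left for the “loser” it rarely recovers. That’s exactly the spiral described below.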

Figure 1: Conversion Variation

With low traffic pages (say 100 visitors or less per day), if one of the alternatives happens to have a really good first day or two then you can end up with 90% of the traffic going to that and 10% going to the other. And these really good and bad days DO happen; it’s the nature of random variation and small sample sizes. I often see pages that have average long term conversion rates of 15% having 2% or even 0% conversion days. In the figure above, the light blue line shows the conversion rate of one of my landing pages over a 30 day period. The rate varies quite randomly between less than 5% and more than 25%.
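The swings in Figure 1 are just what binomial sampling predicts at this sample size. A quick sketch (the 15% true rate and 100 visitors a day are the illustrative numbers from the paragraph above, not measurements):

```python
import random

# Daily observed conversion rate of a page with a true 15% rate and
# ~100 visitors a day: small daily samples swing wildly around the mean.
TRUE_RATE = 0.15
VISITORS_PER_DAY = 100

for day in range(30):
    conversions = sum(random.random() < TRUE_RATE
                      for _ in range(VISITORS_PER_DAY))
    print(f"day {day + 1:2d}: {conversions / VISITORS_PER_DAY:.0%}")
```

With 100 visitors the standard deviation of the daily rate is about 3.6 percentage points, so readings anywhere from roughly 8% to 22% are routine, and the occasional sub-5% or 25%+ day over a month is unsurprising. At 10 visitors a day the swings are far worse.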

So imagine the situation where the initially poorly performing alternative is getting just 10 or so visits per day and normally has conversion rates of <10%. It can easily be DAYS before it has another conversion. And each day Google is evaluating that performance and sending the page LESS traffic. Often this is just 1-2 visitors a day. So failure is basically assured.

Figure 2: Traffic Distribution During Experiment

The net result of this is that for the last week or so of an experiment virtually ALL of the traffic is often being sent to one alternative, simply because it happened to perform better in the first few days. And I’ve now seen this a number of times. I haven’t done the math on it, but looking at my results I’ve had experiments conclude with the winning result getting 10-20x the traffic of the failing result. You can see an example of this above. The winning alternative (the orange lines) showed a great conversion rate in the first few days, which resulted in less and less traffic being sent to the other alternative (the pale blue line). In fact, for the last 10 days of the experiment the poorer performing alternative received almost no traffic. At the end of the experiment the winning result got 1244 visits for 281 conversions and the alternative got just 212 visits with 18 conversions. To my mind 212 visits just isn’t enough to be statistically significant and certainly not enough to declare a conclusive winner.
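One quick way to see how little information 212 visits carries is to put a confidence interval around each variant’s rate. Here’s a sketch using the normal approximation and the figures reported above (a Wilson interval or a proper stats library would be more rigorous):

```python
import math

def conv_ci(conversions, visits, z=1.96):
    """95% normal-approximation confidence interval for a conversion rate."""
    p = conversions / visits
    half = z * math.sqrt(p * (1 - p) / visits)
    return p, max(p - half, 0.0), p + half

for label, c, n in [("winner", 281, 1244), ("loser", 18, 212)]:
    p, lo, hi = conv_ci(c, n)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

The loser’s interval spans roughly 4.7% to 12.2%, which is wide relative to the estimate itself. And because most of those 212 visits arrived in the first few days, they aren’t even a uniform sample of the experiment period, which makes the comparison shakier still.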

It turns out that there’s a setting buried in the Google Experiments advanced options called “Distribute traffic evenly across all variations”. This is OFF by default and needs to be turned on to ensure that the experiment actually uses the 50/50 traffic split (or whatever percentage you choose). My feeling is that it’s hazardous to accept the result of just one GE experiment that uses the multi-armed bandit method, especially for low traffic landing pages. Multiple experiments are required. Of course, this should be true of any A/B test. I also think that if you’re evaluating low traffic pages then you should conduct your experiments using the true 50/50 traffic split and compare those results with the multi-armed bandit method.

As an addendum, here’s someone who takes a contrary position to the rosy view presented by Google in the page I linked to above. The folks at Visual Website Optimizer do not think the multi-armed bandit is better than regular A/B testing.

Dealing Gracefully with Customer Price Objections

Every day I deal personally with emails from new users of my software, existing users, and prospective users. One topic that comes up from time to time is people complaining about the cost of the software (new users), the cost of upgrades (existing users), and having to pay for on-going technical support (existing users). I don’t get many emails like this (perhaps one a week, perhaps less), but that doesn’t mean I haven’t spent some time and effort working out the best way of handling them. Not surprisingly, I’m not the only mISV to have this problem, as someone has been posting about it over on The Business of Software forum. Some of the advice contained therein is pretty good. Some of it (in my opinion) isn’t.

I thought I’d take a look at the best (and worst) responses in the thread. Thread responses are posted as is (no typos have been fixed).

You don’t owe an a customer a lower price unless you promised it. I’d tell the customer, as nicely as possible, that the price will remain as is, and that there may be other products that would meet that customer’s needs better

I like the first sentence of this response. As an mISV you do not have to honour earlier prices unless the customer has been promised access to the lower price. I do not like the second sentence. It’s basically encouraging the customer to look elsewhere. Effort should be made to retain the customer as every customer is a potential source of on-going revenue.

I would give your customer a discount coupon (say 50%) to make them happy and get the sale. Selling at a lower price is better than not selling at all and this is just a one-off. Your customer will be happy and maybe even recommend your software to others because of it. Plus it’s another person you can charge for upgrades in the future.

This is a decent approach. It makes an effort to keep the customer happy, it gives you an opportunity to secure a sale and cement a relationship with the user. It also suggests on-going intangible benefits such as word-of-mouth marketing. I’ve used this approach many times in the past and while I don’t have any hard statistics it does secure the sale in the majority of cases.

I wouldn’t offer him a discount at all — but I love standing my ground, and there’s nothing like sales being made at the higher price to prove that it’s the right price.

Admirable and pointless. If you’re an mISV then you’re likely to want every sale you can get. If you’re selling dozens of licenses a day then a few lost sales are not likely to bother you. The only time I’d completely dismiss a potential sale like this is if the customer has proven to have a high cost of ownership and it would be more cost-effective for you to fire that customer than retain them. Out of the 10,000-odd companies that use my software I’d guess I’ve “fired” a customer perhaps 10 times. So a customer would have to be exceptionally bad to be worth writing off.

Never do anything for free.

Ask the user if they would be willing to send you a postcard from or pictures from their location in return for a discount.

I like this suggestion. It hearkens back to the postcard-ware days of software and forces the customer to do something in return for the discount. Thus it forces them to assign a personal value to receiving the discount. My only reservation is that while making the customer do something is great, receiving a postcard isn’t something that benefits the business long term.

I think the best way to handle this is just to politely write back the customer and say something like, “The truth is that the product isn’t sustainable at the cheaper price. The revenue does not yet cover the development cost, so I have to make more money. My strategy is both to slowly drive up the value (and price) of the product by adding more features, and to slowly expand the potential market and audience for the product with those additional features. “

There are a few posts that suggest something along these lines. This is a terrible approach. You do not need to reason with your customers nor provide them with a story about your business. In most cases they simply won’t care. After all, as a business owner you don’t particularly care why the customer wants your product cheaper, so why on earth would they care why you want more for it?

Ask them for a 300 word testimonial about how they use your product and why they find it useful. Ask for permission to publish it on your website along with their name and a link back to their website. If they comply give them access to the old pricing.

Full disclosure: that’s my response and (not to blow my own trumpet) it’s the best one in the thread. It demands some effort on behalf of the customer to get the discount, it gives the mISV some material that will help build credibility when it is published on their website, and it helps to secure an on-going relationship with the customer. And does it work? Absolutely. I’ve been very forward with my users in the last 12 months in asking for testimonials. And you know what? Almost all of them have been happy to provide one and allow me to publish it with their full name. It’s by far the best way of handling customer price objections gracefully.