Serving up Different Websites on My Local Network

I manage a number of different websites and most of the development has been done offline on a Windows 7 machine. I use the LAMP stack for all my sites and have always found the easiest way of setting up an equivalent WAMP stack on a Windows 7 machine was using XAMPP by Apache Friends. This has always worked fine but was a bit clunky whenever I wanted to switch which website I was working on: it meant killing the Apache processes (httpd.exe) in the Windows Task Manager, editing the Apache conf file to point at the other site, and then restarting Apache. And when crap programs like Skype keep grabbing port 80, restarting Apache is always a pain in the butt.

There had to be an easier way, so this week I took an hour out of my day to work out what it was. I already had Apache installed on my mini file server that runs Ubuntu, so it was just a matter of getting that to serve up the different sites properly. And that was (surprisingly) simple.

Edit the Hosts File on the Server

The first step was to edit the hosts file on the Linux machine with

sudo nano /etc/hosts

and then add entries for the sites I wanted served up locally. My hosts file ended up looking something like this:

127.0.0.1       localhost
127.0.0.1       local-site-1
127.0.0.1       local-site-2

Create Sites Available Files

I then had to create a file for each site in Apache’s sites-available directory (/etc/apache2/sites-available/).

These files are pretty simple and in my case looked like this:

<VirtualHost *:80>
    ServerName local-site-1
    DocumentRoot "/srv/path/to/local/site/1/html_root/"
</VirtualHost>

Just change the ServerName to your local site name and the DocumentRoot to the path where the files for the site reside. In my case the DocumentRoot is a Samba directory that is accessible from my Windows machines (so I can edit the files from my dev machines). Name each file sensibly; in my case I named them local-site-1 and local-site-2, as shown below.
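For example, the file for my second site was identical apart from the name and document root (the path here is just a placeholder):

<VirtualHost *:80>
    ServerName local-site-2
    DocumentRoot "/srv/path/to/local/site/2/html_root/"
</VirtualHost>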

Enable Sites and Reload Apache

Enabling the new sites is simple; just use the following command for each one:

sudo a2ensite local-site-1

(Note that a2ensite expects the site name, not the full path to the file.)

Then reload the Apache configuration with:

sudo /etc/init.d/apache2 reload

Edit the Windows Hosts File

The final step is to edit the hosts file on any Windows machine that you want to access the local sites from. On Windows 7 this can be found in:

%systemroot%\system32\drivers\etc\

I opened this in Notepad (run as administrator, or Windows won’t let you save it) and changed it to look like this:

127.0.0.1 localhost
192.168.2.3 local-site-1
192.168.2.3 local-site-2

Note that 192.168.2.3 is the IP address of my file server.

Now if I want to access one of these local sites in a browser I just need to type local-site-1 into the address bar and hey presto I see the local copy of the website served up by my file server. I love increased productivity!

Potential Improvement

One potential improvement to this process would be to remove the need to edit the Windows hosts files by installing a DNS server (like dnsmasq) on the file server to resolve the local site names to its IP address. Of course this would require changing the DNS settings on the Windows machines.
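I haven’t set this up yet, but as a rough sketch (assuming dnsmasq’s standard address directive), the server’s /etc/dnsmasq.conf would just need an entry per site:

# /etc/dnsmasq.conf on the file server (192.168.2.3)
# Answer DNS queries for the local site names with the server's own IP
address=/local-site-1/192.168.2.3
address=/local-site-2/192.168.2.3

Each Windows machine would then use 192.168.2.3 as its DNS server instead of carrying its own hosts file entries.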

Trouble with Google Experiments

I’ve been using Google Experiments (GE) pretty heavily in the last year and the method it uses to send traffic to landing page alternatives has always confused me a little. So yesterday I did some searching and it turns out that GE uses a “multi-armed bandit” method of splitting up traffic between alternatives. Basically this means that each experiment starts with a short evaluation period during which traffic really is split 50/50 (or whatever percentage you choose). After the evaluation period the conversion rates of both alternatives are measured and more traffic is sent to the higher converting page. This evaluation is carried out a few times a day and the traffic split is adjusted accordingly (there’s a rough sketch of how this kind of allocation works after the list below). The reasoning behind this is supposedly twofold:

1. It minimizes the effect on overall conversion for the period of the experiment if one of your alternatives is particularly horrible.

2. It can lead to a much faster 0.95 confidence result, especially when one alternative performs much better than the other.
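Google doesn’t publish the exact algorithm, but a common way to implement a multi-armed bandit is Thompson sampling over Beta distributions. Here’s a toy sketch in Python (my assumption of how the reweighting might work, not GE’s actual code):

import random

# Toy multi-armed bandit allocator using Thompson sampling (an
# assumption -- Google does not publish GE's exact method). Each
# alternative's conversion rate gets a Beta(conversions + 1,
# misses + 1) posterior; its share of traffic is the probability
# that a draw from its posterior beats the draws of the others.
def traffic_split(stats, draws=10000):
    """stats maps name -> (conversions, visits); returns name -> share."""
    wins = dict.fromkeys(stats, 0)
    for _ in range(draws):
        samples = {
            name: random.betavariate(conv + 1, visits - conv + 1)
            for name, (conv, visits) in stats.items()
        }
        wins[max(samples, key=samples.get)] += 1
    return {name: count / draws for name, count in wins.items()}

Feed it a lucky first couple of days for one page, say traffic_split({"original": (4, 20), "alternative": (1, 20)}), and the split already comes out heavily lopsided even though neither page has seen meaningful traffic yet.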

Figure 1 : Conversion Variation

With low traffic pages (say 100 visitors or less per day), if one of the alternatives happens to have a really good first day or two then you can end up with 90% of the traffic going to it and 10% going to the other. And these really good and bad days DO happen; it’s the nature of random variation and small sample sizes. I often see pages with average long-term conversion rates of 15% having 2% or even 0% conversion days. In the figure above the light blue line shows the conversion rate of one of my landing pages over a 30 day period. The rate varies quite randomly between less than 5% and more than 25%.
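That kind of swing is exactly what small samples produce. Here’s a quick simulation (made-up numbers, not my actual traffic) of 30 days of a page whose true conversion rate is a steady 15% at 20 visits a day:

import random

# Simulate 30 days of a landing page with a true conversion rate of
# 15% at 20 visits per day (illustrative numbers only).
random.seed(1)
for day in range(1, 31):
    conversions = sum(random.random() < 0.15 for _ in range(20))
    print(f"day {day:2d}: {conversions:2d}/20 = {conversions / 20:.0%}")

Days near 0% and days above 25% both turn up even though the underlying rate never moves, and that daily noise is all a bandit algorithm gets to see when it reweights the traffic.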

So imagine the situation where the initially poorly performing alternative is getting just 10 or so visits per day and normally has a conversion rate under 10%. It can easily be DAYS before it has another conversion. And each day Google is evaluating that performance and sending the page LESS traffic. Often this is just 1-2 visitors a day. So failure is basically assured.

Figure 2 : Traffic Distribution During Experiment

The net result of this is that for the last week or so of an experiment, virtually ALL of the traffic is often being sent to one alternative simply because it happened to perform better in the first few days. I’ve now seen this a number of times. I haven’t done the math on it, but looking at my results I’ve had experiments conclude with the winning result getting 10-20x the traffic of the failing result. You can see an example of this above. The winning alternative (the orange lines) showed a great conversion rate in the first few days, which resulted in less and less traffic being sent to the alternative (the pale blue line). In fact, for the last 10 days of the experiment the poorer performing alternative received almost no traffic. At the end of the experiment the winning result had 1244 visits for 281 conversions and the alternative just 212 visits with 18 conversions. To my mind 212 visits just isn’t enough to be statistically significant and certainly not enough to declare a conclusive winner.

It turns out there’s an advanced setting in Google Experiments called “Distribute traffic evenly across all variations”. This is OFF by default and needs to be turned on to ensure that the experiment actually keeps to the 50/50 traffic split (or whatever percentage you choose). My feeling is that it’s hazardous to accept the result of just one GE run that uses the multi-armed bandit method, especially for low traffic landing pages; multiple experiments are required. Of course this should be true of any A/B test. I also think that if you’re evaluating low traffic pages then you should conduct your experiments using the true 50/50 traffic split and compare those results with the multi-armed bandit method.

As an addendum, here’s someone who takes a contrary view to the rosy picture presented by Google in the page I linked to above: the folks at Visual Website Optimizer do not think multi-armed bandit testing is better than regular A/B testing.