Analyzing Split Tests Using R
You don't have to be a data scientist or a statistician to analyze a split test using R. The most challenging part is selecting an appropriate sample size, which requires understanding split testing parameters. If you stop a split test immediately after observing the desired result, you can introduce bias into your experiment. Stopping after collecting an appropriate sample size helps to ensure that you are making unbiased decisions within the error bounds you are comfortable with.
In the following examples, let's assume that we are trying to optimize the conversion rate of a sales funnel. We will refer to our existing sales funnel as A and the changes we are split testing as B. When calculating sample sizes, it's helpful to think of the conversion rates of A and B as random variables with a range of possible outcomes. Thinking of conversion rates in this way allows us to include two error parameters in our sample size calculation:
- False positives (α): concluding that the conversion rate of B is better than A's when, in reality, there is no difference.
- False negatives (β): concluding that there is no difference between the conversion rates of A and B when B's is actually better.
You can pick the values of α and β you are comfortable with, but it's common to set α to 5% and β to 20%. Note that α is more commonly known as the significance level, but its value equals the false positive rate. Similarly, 1-β is called "statistical power," which you will see later in this post; you can think of it as the "true positive" rate.
Another important parameter is the "minimum detectable effect," or MDE for short. As the name implies, this is the smallest percent difference between the conversion rates of A and B that the test can reliably detect. The key point is that the smaller the MDE, the larger the required sample size, because small differences are harder to detect than large ones. A common practice is therefore to set the MDE to 5% or more.
The final parameter you need to calculate the required sample size is the existing conversion rate of A, which you can obtain from your business metrics. Conversion rates can vary considerably, so it's best to pick a conservative (lower) value. A lower value results in a larger-than-required sample size, which is safer than a sample size that is too small.
Once you have the required parameters, you can input them into a sample size calculator. Evan Miller's website has an excellent sample size calculator, along with additional split testing resources. You can also calculate the sample size using the "pwr" package in R, as shown below. Note that Evan Miller's calculator performs the calculation for a two-sided test, whereas the example below performs it for a one-sided or "greater" test.
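A minimal sketch follows. The exact inputs aren't shown in this section, so the values below are assumptions: a 20% baseline conversion rate for A and a 5% relative MDE (i.e., detecting a lift from 20% to 21%), with α = 5% and 80% power. These inputs approximately reproduce the sample size quoted below.

```r
# install.packages("pwr")  # uncomment if the package isn't installed
library(pwr)

p_a <- 0.20               # existing conversion rate of A (assumed)
mde <- 0.05               # minimum detectable effect, relative (assumed)
p_b <- p_a * (1 + mde)    # conversion rate of B we want to detect (21%)

# Cohen's effect size h for comparing two proportions
h <- ES.h(p_b, p_a)

# Solve for n with a one-sided ("greater") alternative:
# is B's conversion rate higher than A's?
pwr.2p.test(h = h,
            sig.level = 0.05,        # alpha
            power = 0.80,            # 1 - beta
            alternative = "greater")
```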
As you can see, the required sample size (n) is 20,150. This means 20,150 for A and 20,150 for B, for a total of 40,300.
After running your experiment, analyzing a split test in R is as easy as inputting the results into the "prop.test" function, which is included in R's standard "stats" package.
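As a sketch, here is what that call looks like with hypothetical counts chosen to line up with the numbers discussed below (roughly a 21% conversion rate for B versus 20% for A, with 20,150 visitors per variant):

```r
# Hypothetical results: conversions out of 20,150 visitors per variant.
# B is listed first, then A.
conversions <- c(4232, 4030)
visitors    <- c(20150, 20150)

prop.test(conversions, visitors)
# With these illustrative counts, the sample estimates are roughly
# 0.210 (B) and 0.200 (A), the p-value is about 0.013, and the 95%
# confidence interval on the difference is roughly 0.2% to 1.8%.
```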
The values of interest in the output are the sample estimates, the p-value, and the 95% confidence interval. The sample estimates are simply the conversion rates of B and A, respectively. Without getting into too much detail about what p-values are, the outcome is generally considered statistically significant if the p-value is lower than α (0.05 in this case). Statistically significant simply means that there is enough data to support the conclusion that there is a difference between the conversion rates of A and B. Don't forget that there is still a small chance, equal to α, that the result is a false positive. The 95% confidence interval is the interval on the observed difference between B and A. For instance, we observed approximately a 1% difference between B and A, but the true difference could be as low as 0.2% or as high as 1.8% once we account for the fact that the conversion rates are modeled as random variables.
Below, I have included an example of what the output looks like when there is no statistically significant difference between A and B.
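Again using hypothetical counts, this time chosen so that the observed difference is small relative to the sampling noise:

```r
# Hypothetical results where B shows only a small, non-significant lift.
conversions <- c(4101, 4030)   # B first, then A
visitors    <- c(20150, 20150)

prop.test(conversions, visitors)
# With these counts, the observed difference is about 0.35%, the p-value
# is about 0.39 (greater than alpha), and the 95% confidence interval
# spans roughly -0.4% to +1.1%.
```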
As you can see, we cannot conclude that there is a difference between the conversion rates of A and B: the observed difference is only 0.35%, the p-value is greater than α, and the 95% confidence interval (-0.4% to +1.1%) includes zero. Note that there is a chance equal to β, or 20%, that this result is a false negative.