A/B Testing of LinkedIn Ads

Below you'll find the R code for a quick and dirty A/B test.

Attention: A/B testing gets advanced quickly. If you really want to dig deep, the best course I know is DataCamp's A/B Testing in R (link).

Question #1: Are my results statistically significant?

Question #2: Is my sample size big enough?

To run this test on your own, you need R with the pwr package installed.

Situation:

Ad "A" = 600 clicks with 32 conversions

Ad "B" = 400 clicks with 44 conversions

So we enter the clicks and conversions in R and run the test.

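The snippet below is taken from the full listing at the end of this post; prop.test() is base R and runs a two-sample test for equality of proportions:

A_clicks <- 600 # Number of clicks for ad A
A_conversions <- 32 # Number of conversions for ad A
B_clicks <- 400 # Number of clicks for ad B
B_conversions <- 44 # Number of conversions for ad B

prop.test(c(A_conversions, B_conversions), c(A_clicks, B_clicks))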

As you can see, the result is statistically significant, with a p-value of 0.001418. Generally, a p-value below 0.05 is what you're after.
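If you store the result in a variable (here called res), you can pull out the individual numbers; the estimates make the direction of the effect obvious:

res <- prop.test(c(A_conversions, B_conversions), c(A_clicks, B_clicks))
res$estimate # estimated conversion rates: about 5.3% for ad A, 11% for ad B
res$p.value # 0.001418...
res$conf.int # 95% confidence interval for the difference in conversion rates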

Assumption: we assume the clicks for ad A and ad B are independent samples and that the counts are large enough for the normal approximation to hold.
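A quick way to sanity-check that assumption is the common rule of thumb that all four counts (conversions and non-conversions per ad) should be at least around 10:

# All four counts should be comfortably above ~10 for the approximation to hold
c(A_conversions, A_clicks - A_conversions, B_conversions, B_clicks - B_conversions)
# 32 568 44 356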

Next, we want to know how many samples we need:

If you're unsure, just enter the total clicks as n1 (here n1 = 1000) and leave the rest as is.

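Here is the call (also in the full listing at the end). Note that n2 is deliberately left out, so pwr.2p2n.test() solves for it:

library(pwr)
sample_size <- pwr.2p2n.test(
  h = 0.20, # guessed effect size (small)
  n1 = 1000, # total clicks, as described above
  sig.level = 0.05,
  power = 0.80,
  alternative = c("two.sided"))
sample_size # n2 in the output is the required size of the second group, roughly 244 here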

The test says the second group needs roughly 244 samples. For ad "B", we have 400 samples (clicks). Thus, we can reasonably conclude that we have enough.
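If you'd rather not guess the effect size, the pwr package also provides ES.h(), which computes Cohen's h from two observed proportions; with our conversion rates it lands close to the 0.20 we plugged in:

# Observed effect size (Cohen's h) from the two conversion rates
h_observed <- ES.h(B_conversions / B_clicks, A_conversions / A_clicks)
h_observed # roughly 0.21 with these numbers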

Conclusion:
As we have enough samples, the hypothesis test can be regarded as valid. In other words, ad "B" (11% conversion rate) is statistically significantly better than ad "A" (about 5.3%).

I.e., you can dump ad "A", statistically speaking :-)

Good luck!

Franco

PS: Below is the code if you want to copy it:

library(pwr)
A_clicks <- 600 # Number of clicks for ad A
A_conversions <- 32 # Number of conversions for ad A
B_clicks <- 400 # Number of clicks for ad B
B_conversions <- 44 # Number of conversions for ad B

# Two-sample test for equality of proportions (question #1)
prop.test(c(A_conversions, B_conversions), c(A_clicks, B_clicks))

sample_size <- pwr.2p2n.test(
  h = 0.20, # Effect size: 0.20 for small differences, 0.50 for medium, 0.80 for large
  n1 = 1000, # Number of samples in the first group (here: total clicks); n2 is left out so the function solves for it
  sig.level = 0.05, # Significance level (type I error probability, typically 5%)
  power = 0.80, # Power of the test (1 - type II error probability)
  alternative = c("two.sided")) # Default "two.sided"; alternatives are "greater" or "less"
sample_size # the reported n2 is the required number of clicks for the second ad (question #2)