top | item 41100696

deugtniet | 1 year ago

I guess I'm not very versed in website A/B testing, but wouldn't it be much better to analyze these results in a regression framework where you can correct for the covariates?

On top of this, logistic regression makes your units a lot more interpretable than just looking at differences in means. For example: the odds of buying something are 1.1 times greater when you are assigned to group B.
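(To make the odds-ratio reading concrete, here is a toy sketch with made-up counts; the numbers are purely illustrative, not from any real test. With equal non-converter counts the ratio works out to exactly 1.1, which is what the exponentiated group-B coefficient of a logistic regression would report.)

```python
# Toy conversion counts (made-up, purely illustrative)
conv_a, n_a = 100, 1000   # group A: 100 buyers out of 1000 users
conv_b, n_b = 110, 1010   # group B: 110 buyers out of 1010 users

odds_a = conv_a / (n_a - conv_a)   # odds of buying in group A
odds_b = conv_b / (n_b - conv_b)   # odds of buying in group B

# exp(coefficient on the group-B dummy) in a logistic regression
odds_ratio = odds_b / odds_a
print(round(odds_ratio, 2))  # → 1.1
```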

crystal_revenge | 1 year ago

This is the correct approach, but having done A/B testing for many years (and basically moved away from this area of work), nobody in the industry really cares about understanding the problem; they care about promoting themselves as experts and creating the illusion of rigorous marketing.

Correct A/B testing should involve starting with an A/A test to validate the setup, building a basic causal model of what you expect the treatment impact to be, controlling for covariates, and finally ensuring that when the causal factor is controlled for, the results change as expected.
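(The A/A sanity check can be sketched in a few lines; this is a minimal illustration using a hand-rolled two-proportion z-test on simulated data with an identical conversion rate in both arms, not anyone's production setup. In a valid setup the p-value should be non-significant most of the time, and uniformly distributed over repeated runs.)

```python
import math
import random

random.seed(0)

def two_prop_z_pvalue(p1, n1, p2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A/A test: both "variants" draw from the same 10% conversion rate
n = 10_000
conv_a = sum(random.random() < 0.1 for _ in range(n))
conv_b = sum(random.random() < 0.1 for _ in range(n))

p_value = two_prop_z_pvalue(conv_a / n, n, conv_b / n, n)
print(round(p_value, 3))
```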

But even the "experts" I've read in this area largely focus on statistical details that honestly don't matter (and if they do the change you're proposing is so small that you shouldn't be wasting time on it).

In practice, if you need "statistical significance" to determine whether a change has made an impact on your users, you're already focused on problems that are too small to be worth your time.

lifeisstillgood | 1 year ago

Ok so, that’s interesting. I like examples, so are you saying I should build a “framework” that presents two (landing) pages exactly the same, and (hopefully) is able to collect things like what source the visitor came from, and maybe some demographics? I then try to get 100 impressions with random blue and red buttons, then check to see if there is some confounding factor (blue was always picked by females linking from Google ads), and then remove the randomness next time and show blue ads to half the females from Google ads and half of everyone else?

I think the dumb underlying question I have is: how does one do experimental design?

Edit: and if you aren’t seeing giant, obvious improvements, try improving something else (I get the idea that my B is going to be so obvious that there is no need to worry about stats; if it’s not, that’s a signal to change something else?).

e10v_me | 1 year ago

Thank you for the interest and for the suggestion.

Yes, one can analyze A/B tests in a regression framework. In fact, CUPED is equivalent to linear regression with a single covariate.

Would it be better? It depends on the definition of "better". There are several factors to consider. Scientific rigor is one of them. So is computational efficiency.

A/B tests are usually conducted at a scale of thousands of randomization units (actually, it's more like tens or hundreds of thousands). There are two consequences:

1. Computational efficiency is very important, especially if we take into account the number of experiments and the number of metrics. And pulling granular data into a Python environment and fitting a regression is much less efficient than calculating aggregated statistics like mean and variance.

2. I didn't check, but I'm pretty sure that, at such a scale, logistic and linear regressions' results will be very close, if not equal.
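(To illustrate point 1: a Welch t-statistic needs only per-group count, mean, and variance, so those aggregates can be computed in the database and only six numbers pulled into Python. The figures below are hypothetical, just to show the shape of the computation.)

```python
import math

# Aggregates as they might come out of a SQL query (hypothetical numbers), e.g.:
#   SELECT variant, COUNT(*), AVG(metric), VAR_SAMP(metric) ... GROUP BY variant
n_a, mean_a, var_a = 100_000, 4.90, 25.0
n_b, mean_b, var_b = 100_000, 5.05, 25.4

# Welch's t-statistic needs only these six numbers, not the raw rows
se = math.sqrt(var_a / n_a + var_b / n_b)
t = (mean_b - mean_a) / se
print(round(t, 2))  # → 6.68
```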

And even if, for some reason, there is a real need to analyze a test using a logistic model, a multilevel model, or clustered errors, in tea-tasting it's possible via custom metrics: https://tea-tasting.e10v.me/custom-metrics/

crystal_revenge | 1 year ago

> And pulling granular data into a Python environment and fitting a regression is much less efficient than calculating aggregated statistics like mean and variance.

This is not true. You almost never need to perform logistic regression on individual observations. Consider that estimating a single Bernoulli rv on N observations is the same as estimating a single Binomial rv for k/N. Most common statistical software (e.g. statsmodels) will support this grouped format.

If all of your covariates are discrete categories (which is typically the case for A/B tests), then you only need to regress on a number of rows equal to the number of unique configurations of the variables.

That is, if you're running an A/B test on 10 million users across 50 states and 2 variants, you only need 100 observations for your final model.
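(A rough sketch of the collapse step, on simulated raw rows: group user-level Bernoulli outcomes into (conversions, trials) per (state, variant) cell. The row count of the grouped table is bounded by the number of cells, 50 x 2 = 100 here, no matter how many users there are; the grouped (successes, failures) format is what software like statsmodels accepts for binomial regression.)

```python
import random
from collections import defaultdict

random.seed(1)

# Simulate "raw" user-level rows: (state, variant, converted)
states = [f"S{i:02d}" for i in range(50)]
raw = [(random.choice(states), random.choice("AB"), random.random() < 0.1)
       for _ in range(100_000)]

# Collapse to grouped-binomial form: one row per (state, variant) cell
cells = defaultdict(lambda: [0, 0])  # cell -> [conversions, trials]
for state, variant, converted in raw:
    cells[(state, variant)][0] += converted
    cells[(state, variant)][1] += 1

# 50 states x 2 variants -> at most 100 rows, regardless of user count
print(len(cells))  # → 100
```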