deugtniet | 1 year ago
On top of this, logistic regression makes your units a lot more interpretable than just looking at differences in means, e.g. the odds of buying something are 1.1 times higher when you are assigned to group B.
crystal_revenge|1 year ago
Correct A/B testing should involve starting with an A/A test to validate the setup, building a basic causal model of what you expect the treatment impact to be, controlling for covariates, and finally ensuring that, when the causal factor is controlled for, the results change as expected.
But even the "experts" I've read in this area largely focus on statistical details that honestly don't matter (and if they do matter, the change you're proposing is so small that you shouldn't be wasting time on it).
In practice, if you need "statistical significance" to determine whether a change has made an impact on your users, you're already focused on problems that are too small to be worth your time.
lifeisstillgood|1 year ago
I think the dumb underlying question I have is - how does one do experimental design
Edit: and if you aren't seeing giant obvious improvements, try improving something else (I get the idea that my B is going to be so obvious that there is no need to worry about stats - if it's not, that's a signal to change something else?)
e10v_me|1 year ago
Yes, one can analyze A/B tests in a regression framework. In fact, CUPED is an equivalent to the linear regression with a single covariate.
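A minimal sketch of that equivalence, on synthetic data: the CUPED-adjusted metric is y_cuped = y - theta * (x - mean(x)), where theta = cov(y, x) / var(x) is exactly the OLS slope of y on x. The adjustment keeps the mean but shrinks the variance when the pre-experiment covariate x is correlated with y.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
x = rng.normal(100, 15, n)               # pre-experiment covariate (synthetic)
y = 0.8 * x + rng.normal(0, 10, n)       # experiment-period metric (synthetic)

theta = np.cov(y, x)[0, 1] / np.var(x)   # equals the OLS slope of y on x
y_cuped = y - theta * (x - x.mean())     # CUPED-adjusted metric

print(np.var(y_cuped) < np.var(y))       # variance is reduced
print(np.isclose(y_cuped.mean(), y.mean()))  # mean is unchanged
```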
Would it be better? It depends on the definition of "better". There are several factors to consider. Scientific rigor is one of them. So is the computational efficiency.
A/B tests are usually conducted at a scale of thousands of randomization units (actually it's more like tens or hundreds of thousands). There are two consequences:
1. Computational efficiency is very important, especially if we take into account the number of experiments and the number of metrics. And pulling granular data into a Python environment and fitting a regression is much less efficient than calculating aggregated statistics like mean and variance.
2. I didn't check, but I'm pretty sure that, at such scale, logistic and linear regressions' results will be very close, if not equal.
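As a sketch of point 1: once counts, means, and variances are aggregated in the warehouse (e.g. via a SQL GROUP BY), a two-sample test needs no granular rows in Python at all. The per-variant numbers below are hypothetical.

```python
from scipy import stats

# Aggregated per-variant statistics (hypothetical numbers)
n_a, mean_a, var_a = 120_000, 0.100, 0.0900
n_b, mean_b, var_b = 119_500, 0.103, 0.0924

t, p = stats.ttest_ind_from_stats(
    mean_a, var_a ** 0.5, n_a,
    mean_b, var_b ** 0.5, n_b,
    equal_var=False,                     # Welch's t-test
)
print(f"t = {t:.2f}, p = {p:.4f}")
```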
And even if, for some reason, there is a real need to analyze a test using logistic model, multi-level model, or a clustered error, in tea-tasting, it's possible via custom metrics: https://tea-tasting.e10v.me/custom-metrics/
crystal_revenge|1 year ago
This is not true. You almost never need to perform logistic regression on individual observations. Consider that estimating a single Bernoulli rv on N observations is the same as estimating a single Binomial rv for k/N. Most common statistical software (e.g. statsmodels) will support this grouped format.
If all of our covariates are discrete categories (which is typically the case for A/B tests) then you only need to run the regression on a number of rows equal to the number of unique configurations of the variables.
That is if you're running an A/B test on 10 million users across 50 states and 2 variants you only need 100 observations for your final model.