pmiller2|11 years ago
The red flag here for me was that Optimizely encourages you to stop the test as soon as it "reaches significance." You shouldn't do that. What you should do is precalculate a sample size based on the statistical power you need, which involves determining your tolerance for the probability of making an error and the minimum effect size you need to detect. Then, you run the test to completion and crunch the numbers afterward. This helps prevent the scenario where your page tests 18% better than itself, by minimizing the probability that your "results" are just a consequence of a streak of positive results in one branch of the test.
I was also disturbed that the effect size wasn't taken into account in the sample size selection. You need to know this before you do any type of statistical test. Otherwise, you are likely to get "positive" results that just don't mean anything.
OTOH, I wasn't too concerned that the test was a one-tailed test. Honestly, in a website A/B test, all I really care about is whether my new page is better than the old page. A one-tailed test tells you that. It might be interesting to run two-tailed tests just so you can get an idea of what not to do, but for this use I think a one-tailed test is fine. It's not like you're testing drugs, where finding any effect, either positive or negative, can be valuable.
I should also note that I only really know enough about statistics to not shoot myself in the foot in a big, obvious way. You should get a real stats person to work on this stuff if your livelihood depends on it.
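The sample-size precalculation described above can be sketched with the textbook two-proportion formula. This is a minimal stdlib-only sketch; the 10% baseline rate, the lift to 12%, and the alpha/power defaults are made-up example values, not numbers from the article.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base, p_variant, alpha=0.05, power=0.8):
    """Visitors needed in each arm to detect a shift from p_base to
    p_variant with a two-sided test at the given alpha and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value of the test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    p_bar = (p_base + p_variant) / 2     # pooled rate under the null
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_base * (1 - p_base)
                             + p_variant * (1 - p_variant)) ** 0.5) ** 2
    return math.ceil(numerator / (p_variant - p_base) ** 2)

# Hypothetical numbers: 10% baseline, hoping to detect a lift to 12%.
n = sample_size_per_arm(0.10, 0.12)
print(n)  # roughly 3,800 visitors per arm
```

The point of the exercise is that you commit to this `n` before the test starts, run until both arms reach it, and only then read the result.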
dsiroker|11 years ago
Hi pmiller, Dan from Optimizely here. Thanks for your thoughtful response. This is a really important issue for us, so I wanted to set the record straight on a couple of points:
#1 - “Optimizely encourages you to stop the test as soon as it reaches ‘statistical significance.’” - This actually isn’t true. We recommend you calculate your sample size before you start your test using a statistical significance calculator and wait until you reach that sample size before stopping your test. We wrote a detailed article about how long to run a test, here: https://help.optimizely.com/hc/en-us/articles/200133789-How-...
We also have a sample size calculator you can use, here: https://www.optimizely.com/resources/sample-size-calculator
#2 - Optimizely uses a one-tailed test, rather than a 2-tailed test. - This is a point the article makes and it came up in our customer community a few weeks ago. One of our statisticians wrote a detailed reply, and here’s the TL;DR:
- Optimizely actually uses two 1-tailed tests, not one.
- There is no mathematical difference between a 2-tailed test at 95% confidence and two 1-tailed tests at 97.5% confidence.
- There is a difference in the way you describe error, and we believe we define error in a way that is most natural within the context of A/B testing.
- You can achieve the same result as a 2-tailed test at 95% confidence in Optimizely by requiring the Chance to Beat Baseline to exceed 97.5%.
- We’re working on some exciting enhancements to our methodologies to make results even easier to interpret and more meaningfully actionable for those with no formal Statistics background. Stay tuned!
Here’s the full response if you’re interested in reading more: http://community.optimizely.com/t5/Strategy-Culture/Let-s-ta...
Overall I think it’s great that we’re having this conversation in a public forum because it draws attention to the fact that statistics matter in interpreting test results accurately. All too often, I see people running A/B tests without thinking about how to ensure their results are statistically valid.
Dan
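The equivalence claimed in the TL;DR is easy to check numerically: the critical value of a two-tailed test at 95% confidence is exactly the critical value of a one-tailed test at 97.5%. A quick sketch using standard normal quantiles (nothing Optimizely-specific):

```python
from statistics import NormalDist

z = NormalDist()

# Two-tailed test at 95% confidence: alpha = 0.05 split across both tails.
two_tailed_crit = z.inv_cdf(1 - 0.05 / 2)

# One-tailed test at 97.5% confidence: all of alpha = 0.025 in one tail.
one_tailed_crit = z.inv_cdf(1 - 0.025)

print(two_tailed_crit, one_tailed_crit)  # both ≈ 1.96
assert two_tailed_crit == one_tailed_crit
```

Which is why requiring "Chance to Beat Baseline" above 97.5% reproduces the familiar two-tailed 95% decision rule.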
wmt|11 years ago
"Honestly, in a website A/B test, all I really am concerned about is whether my new page is better than the old page. A one-tailed test tells you that."
No, it's the other way around. A one-tailed test is only usable for testing whether the new design is worse than the old one, because the new design being better than the old one doesn't matter as long as it's not worse. If you are testing whether the new design is better, you definitely need to test both tails, or else you may well switch to a design that is worse than the old one.
aggie|11 years ago
Why not run a two-tailed test and double the alpha? If I'm understanding it correctly, you'll still make the same conclusion at either tail as a one-tailed test, but this way you have both directions covered. I could be missing something; just thinking out loud.
antr|11 years ago
All users who use SumAll should be wary of their service. We tried them out and we then found out that they used our social media accounts to spam our followers and users with their advertising. We contacted them asking for answers and we never heard from them. Our suggestion: Avoid SumAll.
JacobSumAll|11 years ago
Hey Antr, Jacob from SumAll here. Sorry to hear you had a bad experience with us. The tweets you're talking about that "spam" your accounts were most likely the performance tweets that you are free to toggle on and off. Here's how you can do that: https://support.sumall.com/customer/portal/articles/1378662-...
Best, Jacob
josefresco|11 years ago
This article comes off as a bit boastful and somewhat of an advertisement for the company...
"What threw a wrench into the works was that SumAll isn’t your typical company. We’re a group of incredibly technical people, with many data analysts and statisticians on staff. We have to be, as our company specializes in aggregating and analyzing business data. Flashy, impressive numbers aren’t enough to convince us that the lifts we were seeing were real unless we examined them under the cold, hard light of our key business metrics."
I was expecting some admission of how their business is actually different/unusual, not just "incredibly technical". Secondly, I was expecting to hear that these "technical" people monkeyed with the A/B testing (or simply overthought it), which got them into trouble... but no, just a statement about how "flashy" numbers don't appeal to them.
I think the article would be much better without some of that background.
falsestprophet|11 years ago
> We decided to test two identical versions of our homepage against each other... we saw that the new variation, which was identical to the first, saw an 18.1% improvement. Even more troubling was that there was a “100%” probability of this result being accurate.
jere|11 years ago
Wow. Cool explanation of one-tailed and two-tailed tests. Somehow I have never run across that. Here's a link with more detail (I think it's the one intended in the article, but a different one was used): http://www.ats.ucla.edu/stat/mult_pkg/faq/general/tail_tests...
seanflyon|11 years ago
It seems like I see these articles pop up on a regular basis over at Inbound or GrowthHackers.
ssharp|11 years ago
I think the problem is two-sided: one part on the tester and one part on the tools. The tools' "statistically significant" winners MUST be taken with a grain of salt.
On the user side, you simply cannot trust the tools. To avoid these pitfalls, I'd recommend a few key things. One, know your conversion rates. If you're new to a site and don't know its patterns, run A/A tests, run small A/B tests, and dig into your analytics. Before you run a serious A/B test, you'd better know historical conversion rates and recent conversion rates. If you know your variances, even better, but you can probably get a heuristic sense of your rate fluctuations just by looking at analytics and doing A/A tests. Two, keep running your tests well after you get a "winning" result. Three, have the traffic. If you don't have enough traffic, your ability to run A/B tests is greatly reduced and you become more prone to making mistakes, because you're probably an ambitious person and want to keep making improvements! The nice thing here is that if you don't have enough traffic to run tests, you're probably better off doing other stuff anyway.
On the tools side (and I speak from using VWO, not Optimizely, so things could be different): VWO's tags are on all my pages, and VWO knows what my goals are. Even if I'm not running active tests on pages, why can't they collect data anyway and get a better idea of what my typical conversion rates are? That way, that data can be included and considered before they tell me I have a "winner". Maybe this is nitpicky, but I keep seeing people who are actively involved in A/B testing write articles like this, and I have to think the tools could do a better job of not steering intermediate-level users down the wrong path, let alone novice users.
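The advice above to keep running a test past the first "winning" readout has a concrete statistical basis: checking for significance repeatedly and stopping at the first hit inflates the false-positive rate well beyond the nominal 5%. A small A/A simulation illustrates it; the traffic numbers, peek interval, and seed are all made up for the sketch.

```python
import random
from statistics import NormalDist

def z_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: True if |z| exceeds the two-sided critical value."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    if se == 0:
        return False
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return z > NormalDist().inv_cdf(1 - alpha / 2)

def aa_test_with_peeking(rng, p=0.10, peek_every=100, max_n=2000):
    """A/A test: both arms truly convert at rate p. Peek after every
    `peek_every` visitors per arm and stop at the first 'significant' result."""
    conv_a = conv_b = 0
    for n in range(1, max_n + 1):
        conv_a += rng.random() < p
        conv_b += rng.random() < p
        if n % peek_every == 0 and z_significant(conv_a, n, conv_b, n):
            return True   # declared a (spurious) winner and stopped early
    return False

rng = random.Random(42)
runs = 200
false_wins = sum(aa_test_with_peeking(rng) for _ in range(runs))
print(false_wins / runs)  # well above the nominal 0.05
```

Each individual peek has only a 5% false-positive rate, but twenty chances to stop on a streak compound into a much larger one, which is exactly how an A/A test "wins" by 18%.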
jmount|11 years ago
I just checked in one possible R calculation of two-sided significance under a binomial model, with the simple null hypothesis that A and B have the same common rate (and that that rate is exactly what was observed, a simplifying assumption): http://winvector.github.io/rateTest/rateTestExample.html . The long and short of it is that you get slightly different significances depending on what model you assume, but in all cases it is easy to calculate an exact significance subject to your assumptions. In this case it says differences this large would be seen only about 1.8% to 2% of the time (a two-sided test). So the result isn't that likely under the null hypothesis (and then you make a leap of faith that maybe the rates are different). I've written a lot about these topics on the Win-Vector blog: http://www.win-vector.com/blog/2014/05/a-clear-picture-of-po... .
They said they ran an A/A test (a very good idea), but the numbers seem slightly implausible under the assumption that the two variations are identical (which, again, doesn't immediately imply the two variations are in fact different).
The important thing to remember is that your exact significances/probabilities are a function of the unknown true rates, your data, and your modeling assumptions. The usual advice is to control the undesirable dependence on modeling assumptions by using only "brand name" tests. I actually prefer using ad hoc tests, but discussing what is assumed in them (one-sided/two-sided, pooled data for the null, and so on). You definitely can't assume away a thumb on the scale.
Also this calculation is not compensating for any multiple trial or early stopping effect. It (rightly or wrongly) assumes this is the only experiment run and it was stopped without looking at the rates.
This may look like a lot of code, but the code doesn't change over different data.
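A plain-Python version of the kind of two-sided calculation jmount describes, using a pooled-rate normal approximation rather than his exact binomial model. The counts below are invented for illustration (roughly a 10% baseline with an 18% relative lift), not the article's actual data.

```python
from statistics import NormalDist

def two_sided_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference of two conversion rates,
    under the pooled null hypothesis that both arms share one rate."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(z))

# Invented counts: 325/3250 (10.0%) vs 384/3250 (11.8%, an ~18% relative lift).
p = two_sided_p(325, 3250, 384, 3250)
print(round(p, 3))  # about 0.019, i.e. in the ~2% range jmount mentions
```

As jmount notes, this p-value is conditional on the modeling assumptions (two-sided, pooled null, a single experiment with no peeking); change those and the number moves.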
hvass|11 years ago
I would be curious to know what percentage of teams with statisticians/data people actually use tools like Optimizely. A lot of people seem to be building their own frameworks that use a lot of different algorithms (two-armed bandits, etc.). From my understanding, Optimizely is really aimed at marketers without much statistical knowledge.
Of course, if you're a startup, building an A/B testing tool is your last priority, so you would use an existing solution.
Are there much more advanced 'out-of-the-box' tools for testing out there besides the usual suspects, i.e. Optimizely, Monetate, VWO, etc.?
kareemm|11 years ago
This title used to read "How Optimizely (Almost) Got Me Fired", which is the actual title of the article.
It seems a mod (?) changed it to "Winning A/B results were not translating into improved user acquisition".
I've seen a mod change a descriptive title left by the submitter back to the less descriptive original. So I'm curious why a mod would editorialize certain titles, changing them away from their originals, but undo the editorializing of others, changing them back to the less descriptive originals.
dshacker|11 years ago
I feel that the second title is better, as it describes the kind of testing they are using instead of being clickbait along the lines of "HOW DID IT GET YOU FIRED?".
tieTYT|11 years ago
> The kicker with one-tailed tests is that they only measure – to continue with the example above – whether the new drug is better than the old one. They don’t measure whether the new drug is the same as the old drug, or if the old drug is actually better than the new one. They only look for indications that the new drug is better...
I don't understand this paragraph. They only look for indications that the drug is better... than what?
dk8996|11 years ago
Do any of these tools show you a distribution of the variable you're trying to optimize? I'm just thinking that some product features might be polarizing, but if you measure the mean it might give you different results than expected. I'm thinking that's where the two-tailed test comes in.
hawkice|11 years ago
Perhaps the most troubling element is that Optimizely seems comfortable claiming 100% certainty in anything. That requires (in Bayesian terminology) infinite evidence, or equivalently (in frequentist terminology), if they have finite data, an infinite gap between mean performances.
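The point above can be made concrete with a quick Bayesian sketch: with Beta posteriors over the two conversion rates, the posterior probability that B beats A approaches but never reaches 1 on finite data. This is a Monte Carlo estimate with made-up counts, not how Optimizely computes its number.

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=20000, seed=0):
    """Monte Carlo P(rate_B > rate_A) under independent
    Beta(1 + conversions, 1 + misses) posteriors (uniform priors)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / draws

# Even a clearly positive (invented) result leaves nonzero doubt:
p = prob_b_beats_a(100, 1000, 130, 1000)
print(p)  # high, but a finite sample can never justify exactly 100%
```

Any honest readout of this quantity should be strictly between 0 and 1; a reported "100%" is a rounding or presentation choice, not evidence of certainty.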
This is all fine and good, but if your goal is to see what works best among X new versions of a page and you are rigorous in creating variants, Optimizely is a great tool for figuring out the best-converting variant.
Except, apparently, they aren't actually that good at _that_. If an A/A test can yield a 100% chance of an 18% uplift, what gives you any degree of certainty that other tests won't have equally skewed results?
fvdessen|11 years ago
In my experience Optimizely does everything they can to mislead their users into overestimating their gains.
Optimizely is best suited at creating exciting graphs and numbers that will impress the management, which I guess is a more lucrative business than providing real insight.
claar|11 years ago
The headline isn't really what this article is about, particularly the disparaging of Optimizely. Might I suggest "The Dangers of Naive A/B Testing", "Buyer Beware: A/B Methodologies Dissected", or "Don't Blindly Trust A/B Test Results".
raverbashing|11 years ago
Here's the thing, stop A/Bing every little thing (and/or "just because") and you'll get more significant results.
Do you think the true success of something is due to A/B testing? A/B testing is optimizing, not architecting.
pocp2|11 years ago
Optimizely actually has a decent article on it: https://help.optimizely.com/hc/en-us/articles/200040355-Run-...
dmourati|11 years ago
"They make it easy to catch the A/B testing bug..."