top | item 17912427

Google Optimize now free for everyone

164 points | gmays | 7 years ago | blog.google | reply

50 comments

[+] rightbyte|7 years ago|reply
"Optimizing" UI for session length and clicks is the same thing as finding the local maxima of bad, right?
[+] uberstuber|7 years ago|reply
The effects of this are easiest to see with online recipes: the highest-ranking recipes are all thousand-word ramblings with a recipe tacked on at the end. Google sees you spent more time on the site (i.e., time wasted scrolling) and thinks you were more 'engaged.'
[+] indiesolver|7 years ago|reply
I guess it is up to you to define the metric to be measured and optimized. Defining the right metric and targets is not easy and is probably business-specific, i.e., not the same for every business.
[+] stochastic_monk|7 years ago|reply
My user experience on Google and most websites has steadily deteriorated over the last few years.
[+] John_KZ|7 years ago|reply
It depends on your goal. If you want to show more ads for longer, it's the local maximum of profit. If you want to provide something like a quick reference guide, then yes, it's terrible. Instead of optimizing for some arbitrary metric, you should focus on doing whatever you're doing right.

Behavioral analytics on user interaction can help you fix a bad design but shouldn't matter that much.

[+] jakob223|7 years ago|reply
Not necessarily - if I have a website that is trying to offer information to people about something, then it's better for everyone if it's more navigable/understandable, which session length / clicks might be a proxy for.

That said, it certainly can be.

[+] maxpupmax|7 years ago|reply
This is a freemium product. There's an enterprise upgrade that costs about $10k per year, I believe. It's nice though; I use it on my site.
[+] AstralStorm|7 years ago|reply
Additionally, it's a freemium product where Google can steal all your ideas, because you just gave them the license.

(not necessarily true, but it has happened before with YouTube)

[+] GunlogAlm|7 years ago|reply
'Optimize' has been around for quite some time, just in case anybody thinks this is new. In fact, I believe this feature has been around for over a year now.
[+] curo|7 years ago|reply
Does anyone have a rule of thumb on when A/B testing becomes important for startups?

We have a few thousand visitors a month and are starting to convert, but my guess is A/B testing language and buttons would be premature optimization for us. Just curious at what point that's no longer the case.

[+] 505aaron|7 years ago|reply
The last heavily trafficked site I worked on wouldn't perform an A/B test unless they could experiment with tens of thousands of daily active users. The experiments would last 2-3 weeks to gain statistical significance. A page usually had a 70/30 control/experiment split.

The challenge is gaining statistically significant data. I think it's easier for an early-stage company to talk to its customers than to go through the time of a split test.
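The duration math behind the parent comment is simple division: with an uneven split, the smaller arm is the bottleneck. A back-of-the-envelope sketch (all numbers here are illustrative assumptions, not figures from the commenter's actual site):

```python
# Rough test duration for a 70/30 split; all inputs are assumed for illustration.
daily_users = 50_000       # daily active users exposed to the tested page
needed_per_arm = 300_000   # sample needed in each arm for significance
smaller_share = 0.30       # the 30% side of a 70/30 split fills slowest

days = needed_per_arm / (daily_users * smaller_share)
print(f"~{days:.0f} days to fill the smaller arm")  # ~20 days, i.e. about 3 weeks
```

With these assumed numbers the smaller arm takes about three weeks to fill, which matches the 2-3 week experiment window described above.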

[+] citrablue|7 years ago|reply
Some pretty easy rules of thumb, assuming you have a decent grasp of your economics. Look at it as a "low-hanging fruit" optimization problem -- do you put resources into running a test (+ opportunity cost of lost sales), or into something else?

Suppose you have 10k monthly sessions with a 0.5% conversion rate (50 conversions). How many more customers would you need in order to prioritize running a test? If 55 conversions in a given month means you crush important KPIs, then that's probably worth testing -- you just need a 10% lift.*

Also keep in mind that running A/B tests (1 control, 1 treatment) is suboptimal. That tests, "does this beat what I have now?" The more important question is "what is my best option?".

OTOH, if other things like messaging and product are stable, you can test a smaller traffic site by leaving it running longer.

My rough estimate is 100 conversion events in the time the test runs. So if I have 100 conversion events in 1 month, it may make sense to run a 2-3 option + 1 control test for 1 month.

(You can also test much larger things than buttons. For startups, I like to suggest trying out positioning or value statements and seeing how visitors respond!)

* however, it'll take a long time for you to reach statistical confidence for a 10% lift in rate, with only 50 conversion events across all tests.
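The footnote's caution can be checked with a standard two-proportion sample-size formula. A sketch using only the standard library; the 0.5% baseline and 10% relative lift come from the comment above, while the significance level (0.05) and power (80%) are conventional assumptions I've filled in:

```python
from math import sqrt
from statistics import NormalDist

def sample_size_per_arm(p_base, lift, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for detecting a relative
    lift over a baseline rate in a two-proportion test."""
    p_new = p_base * (1 + lift)
    delta = p_new - p_base
    p_bar = (p_base + p_new) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p_base * (1 - p_base) + p_new * (1 - p_new))) ** 2 / delta ** 2
    return int(n) + 1

n = sample_size_per_arm(0.005, 0.10)  # 0.5% baseline, 10% relative lift
print(n)  # roughly 330,000 sessions per arm
```

At 10k monthly sessions split across two arms, that's years of traffic, which is exactly the footnote's point: a 10% lift at a 0.5% baseline is very slow to confirm.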

[+] the_bear|7 years ago|reply
There are some calculators that help you figure this out. Here's one I just found with a Google search: https://www.optimizely.com/sample-size-calculator/

The idea is that to get results, you need either a lot of data or a big impact from the changes you're testing. If you just change the color of the signup button, it probably won't have a major impact on the conversion rate, so you'll need a lot more data to reach a conclusion. But if you test a completely new landing page, it has a better chance of being meaningfully different (better or worse, who knows until you test?), so you won't need as many visitors to get a result.

[+] roberttod|7 years ago|reply
In order to be useful, you probably want to see A/B tests reaching a conclusion in under 30 days. I'd say for a conversion rate goal, that's going to be when you have around 100k visitors a month.

There are a few variables to consider:

- What goal would you like to A/B test? Conversion rate is an end-of-funnel goal that needs a lot of traffic; you can use upper-funnel goals like product views, add-to-cart, etc. to reach conclusions faster (not as accurate, but often a good approximation).

- The stats engine/A/B testing tool you are using. Simpler tools might conclude faster, but in my experience they can be so inaccurate they are counterproductive. Usually a long time to conclude = reliable results. I've never used Google Optimize, so I'm not sure where it stands.

- How many people are being exposed to the A/B test; for example, is it all web traffic or just mobile?

- How much of an effect the A/B test has on behavior. A button color/text change will normally take longer to conclude than a feature that's really helping your users.

- How confident do you want to be before reaching a conclusion? I'd recommend looking for 95% confidence in uplift before concluding an A/B test.
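The 95% confidence check in the last bullet is typically a two-proportion z-test. A minimal sketch with only the standard library; the arm sizes and conversion counts below are made-up illustrative numbers, not data from any real test:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative: control 5.0% vs treatment 5.8% over 10k sessions each.
z, p = two_proportion_z(500, 10_000, 580, 10_000)
print(f"z={z:.2f}, p={p:.3f}")  # p < 0.05, so significant at 95% confidence
```

If p comes out above 0.05, the honest conclusion is "keep the test running (or call it inconclusive)", not "ship the variant".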

[+] shostack|7 years ago|reply
If you have an easy way of rolling out tests, and enough data to reach statistical significance, IMHO it's never too early to test.
[+] thedirt0115|7 years ago|reply
At least thinking about it is important NOW, whatever stage your startup is at. It can be extremely difficult to slot a 3rd-party A/B testing solution into your product (or really hard to roll your own) if your infra doesn't support it from the start. Also, hire a data scientist! (Disclaimers: I am not a data scientist, I just think everyone needs one! I have worked on an experimentation system at $BIG_COMPANY.)

I'd suggest thinking about the following BEFORE YOU RUN A SINGLE A/B TEST:

1) Key Metrics: Define these. They are the general, "I don't care what your experiment is about, these numbers are important." Every experiment you run should automatically track these metrics. You should also give the ability to define custom metrics, since an experiment that changes some random button color probably wants to look at how many people clicked the button, which is almost definitely NOT a key metric.

2) Logging Infrastructure: Make sure that you have an easy-to-use, reliable data pipeline set up for logging and processing events. Bad logging == bad experiment results. Also consider streaming vs. batch processing for updating experiment results.

3) Population Management: How do your experiments segment users? Are variants calculated in realtime? Batched with some SLA for lag? Are they sticky?

4) Mutual Exclusion: People running experiments often want "their" users excluded from other experiments.

5) Guardrails: Do your experiments automatically shut off if there is a catastrophic decline in one or more key metrics? What safety measures do you have around determining whether an experiment is safe/valid? How do you handle cleaning up data when there's a problem? What sorts of actions invalidate an experiment's existing results? Does your entire site break if your A/B testing service is down for whatever reason?

6) Cleanup/Ownership: Experiments don't run forever (at least they shouldn't!). Cleaning up old features, populations, etc. can be a pain, especially when the people who wrote the stuff originally no longer work at the company. Make cleanup mandatory and as easy as possible.

There's a lot more, but I'm tired now. A/B testing is complex. There are lots of resources out there, though. Look for white papers on the subject; they're surprisingly approachable. Example from Microsoft: https://exp-platform.com/Documents/2017-08%20KDDMetricInterp...
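The "sticky" assignment in point 3 is commonly implemented by hashing a stable user id together with the experiment name, so a user always lands in the same variant without any realtime lookup or stored state. A minimal sketch; the function name, variant labels, and weights are all illustrative, not any particular product's API:

```python
import hashlib

def assign_variant(user_id, experiment,
                   variants=("control", "treatment"), weights=(0.5, 0.5)):
    """Deterministic, sticky bucketing: same user + experiment -> same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if bucket <= cumulative:
            return variant
    return variants[-1]

print(assign_variant("user-42", "new-checkout"))  # stable across calls
```

Salting the hash with the experiment name also gives a crude answer to point 4: a user's bucket in one experiment is independent of their bucket in another, though true mutual exclusion still needs an explicit layering scheme on top.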

[+] matt4077|7 years ago|reply
"Published Mar 30, 2017"

(I'm not complaining; it's interesting and I'll give it a try. But it's not brand new.)

[+] na85|7 years ago|reply
Where is the requirement that we only see "brand-new" stuff on HN?
[+] jwatte|7 years ago|reply
So Google just killed Crazy Egg and Optimizely?
[+] oh-kumudo|7 years ago|reply
I'm curious how long this promise will be kept.