antonfire | 4 years ago
I'm not talking about measuring clicks on the ad. I'm talking about either surveying people or about measuring their post-ad behavior, e.g. whether they buy a product. Yes, this generally misses offline behavior, but it captures a lot more online behavior than whether or not you click the ad. People measure clicks on the ad too, but that's not what I'm describing.
What this requires is accurately tracking a user across the internet, i.e. being able to identify a user who is part of your experiment as the same user later buying a product (or visiting a website, or answering a survey). Which is an imperfect mechanism. But it works well enough to run this kind of experiment.
Ad blockers don't really mess this up. The experiment design takes the existence of ad blockers into account. E.g. if everyone used an ad blocker, this kind of experiment wouldn't show positive results (except by random variation).
And you're right, you do need a lot of data to get statistically significant results when people don't buy the product that often (when "conversion rates are low", in the lingo), which is a challenge with measuring "conversions". It's a lot easier to measure those for, say, mobile games than it is for cars. If you're a car manufacturer, then measuring car buying this way isn't going to work.
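To make the "you need a lot of data" point concrete, here's a rough back-of-the-envelope sketch (pure Python, standard normal-approximation sample-size formula; the 1% baseline conversion rate and 10% relative lift are made-up illustrative numbers, not from any real campaign):

```python
from math import ceil
from statistics import NormalDist

def users_per_group(p_a, p_b, alpha=0.05, power=0.8):
    """Approximate users needed per group to detect the difference
    between two conversion rates (two-sided test, normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    effect = (p_b - p_a) ** 2
    return ceil(z ** 2 * variance / effect)

# Detecting a lift from a 1.0% to a 1.1% conversion rate takes
# hundreds of thousands of users per group:
print(users_per_group(0.010, 0.011))
```

Note how the required sample size explodes as the baseline conversion rate shrinks, which is why this works better for cheap in-app purchases than for cars.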
When you do this in practice, it turns out sometimes the results are significant and sometimes they aren't. Probably because some ads work and some ads don't.
> It tells you which style of ad works best, without telling you how much better it works than simply not bothering.
The style of experiment I described is a "holdback" experiment. It compares showing people the ad vs. simply not showing (some subset of) people that ad. People in control group B are treated as though the ad under experiment never existed in the first place. (Which typically means showing them some other ad in its place, because that's what would be done to users if the campaign under experiment weren't being run.)
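A holdback split like this is often implemented by hashing a stable user identifier, so the same user always lands in the same group. A minimal sketch (the 10% holdback fraction, experiment name, and ad names are all illustrative placeholders):

```python
import hashlib

HOLDBACK_FRACTION = 0.10  # e.g. hold back 10% of users from the campaign

def bucket(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to 'holdback' or 'exposed'.
    Hashing (experiment name + user id) keeps the assignment stable
    per user and independent across experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    value = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "holdback" if value < HOLDBACK_FRACTION else "exposed"

def ad_to_show(user_id: str) -> str:
    if bucket(user_id, "campaign-123") == "holdback":
        return "some-other-ad"  # what the user would have seen anyway
    return "campaign-123-ad"
```

The same hash is what lets you later tag a conversion event as belonging to the exposed or holdback group, as long as you can identify the user at both points.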
> But what about everyone else (group B)? That's the rest of the population of the planet.
This isn't how A/B tests work. Groups A and B aren't "people who see your website with change A" and "everyone else, including people who never interact with anything you showed them at all". Good experiment design means a control group you can actually measure something about. These experiments aren't stupid. (Well, sometimes they are. You have to set them up well.)
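Once both groups are tracked through to conversion, comparing them is a standard two-proportion z-test. A sketch with made-up counts (not real campaign numbers):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts in group A (saw the ad) against
    group B (holdback). Returns (absolute lift, z statistic,
    one-sided p-value for 'A converts more than B')."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return p_a - p_b, z, 1 - NormalDist().cdf(z)

# Hypothetical campaign: 1.15% conversion when exposed vs. 1.00% held back.
lift, z, p = two_proportion_test(conv_a=1150, n_a=100_000,
                                 conv_b=1000, n_b=100_000)
```

If the p-value clears your significance threshold, the ad measurably moved conversions relative to not bothering; if not, you can't distinguish it from noise, which (per the above) is the common outcome with small samples or weak ads.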
> And it doesn't tell you how many people were sufficiently annoyed by the ad to vow never to buy that brand.
Well sure, but it can tell you if your ad results in people answering survey questions about your brand more negatively, which might help you notice that your ad is annoying and counterproductive.
Anyway, long story short, internet advertising is a whole lot more measurable than you were originally suggesting with "But how could one prove such a thing? It would involve peering into people's minds."
Yes, there are limitations. Yes, a lot of statistics about marketing "working" is bullshit. But some of it isn't.