This is breaking some links. You might believe it "shouldn't", and a server "should" ignore the added params, but the reality is it's breaking them. This past weekend, I posted a link to an image on Facebook, and FB generated the preview fine, but created a link that 404's.
My link: https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg
Facebook made: https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg?fbclid=IwAR2...
I guess if FB really wants, they could make a second fetch to ensure that their added params don't break the third party server. Or could they add a whitelist of domains that use their first-party tracking?
I really don't like the end result: right now it looks like "the web works" from inside FB, but not when you try to follow the link out of it. I don't believe at all that that is FB's intent here, but it's just one more time that a silo breaks another part of the ecosystem, and to an untrained eye it looks like the third party is the culprit.
I had a quick look at the protocol but couldn't see anything on handling that case.
Regardless of the protocol, it's not unreasonable to return a 422 or 403 by default for that (malformed) request when first seen, as it would indicate that something sketchy may be going on; the parameter can be whitelisted later.
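A minimal sketch of that policy in Python, assuming a hypothetical endpoint; the known and whitelisted parameter names are illustrative, not from any real server:

```python
from urllib.parse import urlparse, parse_qs

# Parameters this hypothetical endpoint actually understands.
KNOWN_PARAMS = {"post_id", "comment_id", "page"}
# Tracking parameters we've explicitly decided to tolerate.
WHITELISTED_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid"}

def check_query(url: str) -> int:
    """Return an HTTP status: 200 if every query parameter is known
    or whitelisted, 422 otherwise."""
    params = set(parse_qs(urlparse(url).query).keys())
    unexpected = params - KNOWN_PARAMS - WHITELISTED_PARAMS
    return 422 if unexpected else 200
```

A real server would log the unexpected names before rejecting, so they can be reviewed and whitelisted later.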
There are a number of comments here from people who seem genuinely happy about this. That is a perspective that is hard for me to understand, largely because I'm strongly in the pro-privacy, anti-tracking camp.
So if you are part of the group who sees this as a good thing, I'm genuinely interested to understand why, and whether you view the mass surveillance of the general public by advertising companies as bad?
Targeted ads have helped small businesses and indie brands thrive.
Previously, only large brands and national/multinational corporations could afford to advertise at scale and reach customers through TV, radio, and newspapers (and even then with a high minimum spend).
Now your local mom and pop bakery could have a spend as low as $100 a month to reach their customers and help drive their business.
The world is not black and white, and neither is the morality of advertising.
Engineer here: I used to be completely anti-tracking.
Then, I started needing analytics for my own business. Without analytics, I wouldn't be able to sell with efficiency, and therefore, I wouldn't have a business. Granted, the anti-consumerist in me thinks maybe as a society we shouldn't be so concerned with our efficiency to sell. But, we live in a capitalist world, and I don't see that changing any time soon.
The way I see it now, I'm less concerned about tracking than I am about how big some businesses are -- especially in this space.
Every startup I know uses analytics, and none of them are doing anything spooky. But I'm sure there's plenty of spooky stuff going on at the FAANGAMUs.
My guess is that it's the Facebook employees, advertisers, and marketing people who like this.
There's a flawed belief that tracking is necessary. UX, on the other hand, suggests otherwise. AdTech is not concerned with UX, though, and tries to wrap targeting in some kind of pseudo user benefit: spin.
Good products and services sell even without tracking. Advertising is an economic powerhouse though and will always push for anti-UX trends because it fundamentally runs polar opposite to the user experience.
I understand your stance from an end user's perspective completely.
However, it's not hard to reason why people whose livelihoods depend on being able to track users and increase the value of their ad inventory would be happy about this.
People are unusually good at separating their personal interests from consumer interests. I've observed this first hand in many entrepreneurs, whether in brick-and-mortar retail, conventional energy, or obsolete auto parts: it's common for people to be happy about events that benefit their livelihood even when they have a negative impact on humanity or the ecosystem.
https://www.inc.com/peter-roesler/facebook-to-allow-for-firs...
https://digiday.com/marketing/wtf-what-are-facebooks-first-p...
Basically, FB is expanding its tracking, adding first-party cookie tracking alongside its third-party cookie tracking. I suspect the click-id query string is part of that rollout. This helps it get around things like Apple's new ITP (Intelligent Tracking Prevention) 2.0 in Safari.
This is actually fantastic news for advertisers that have their own data warehouses and need better 1-to-1 tracking from clicks to internal user data. It allows much better attribution and testing of incrementality, so businesses can tell where their value is truly coming from.
I’m pretty excited to see this roll out more broadly.
FB just doesn't understand the optics they create.
UTM parameters tag campaigns at the aggregate level, to be used for reporting. The fbclid is almost certainly unique to the click. While you could make, e.g., utm_content unique per click, that's not what it's for. Anything in a UTM parameter is intended to be human-readable and will almost certainly appear as-is in a report somewhere. Click ID parameters are internal IDs used to join data sources, which is not the type of data that should go into UTM parameters.
Note that Google Analytics, the tool that invented UTM parameters, itself does not use UTM parameters when it does this sort of thing. Google Analytics uses the gclid (AdWords) or dclid (DoubleClick) to join against user or click level data from other tools.
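The join described above can be sketched as follows. The records and field names here are hypothetical, but the mechanics (a click ID captured on the landing page, later matched against an ad-platform export) are the same whether the ID is a gclid, dclid, or fbclid:

```python
# Click IDs captured by the landing page, keyed to internal user records.
landing_hits = [
    {"click_id": "IwAR2aaa", "internal_user": 101},
    {"click_id": "IwAR2bbb", "internal_user": 102},
]

# Ad-platform export: which campaign generated each click ID.
ad_clicks = {
    "IwAR2aaa": {"campaign": "fall_sale"},
    "IwAR2bbb": {"campaign": "retargeting"},
}

# Join the two sources on the click ID to attribute users to campaigns.
attribution = {
    hit["internal_user"]: ad_clicks[hit["click_id"]]["campaign"]
    for hit in landing_hits
    if hit["click_id"] in ad_clicks
}
```

Because the click ID is opaque and per-click, the join works even though neither side ever sees the other's user identifiers directly.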
UTMs are used for a website's own analytics, while fbclid will link a user's actions on a website to advertising done on Facebook and report on it within FB. UTMs just track where a user came from; fbclid will track who a user is and match it back to FB ads.
This is the first time I've read about UTM. It looks like those parameters are used to track ad campaigns: they provide metrics on which campaign works best.
The "fbclid" parameters on the other hand seems intended to track individual clicks. That is, Facebook wants to keep tracking individuals when they follow links to off-site pages.
So they're modifying URLs? Facebook is breaking things. But sure, they've run the numbers and decided they don't care.
Browsers will now have to resort to removing query parameters to prevent tracking. And websites should really use click-to-enable sharing buttons to prevent Facebook from snooping on everything.
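Stripping such parameters client-side is straightforward. A sketch in Python, with an illustrative (not exhaustive) list of tracking parameters:

```python
from urllib.parse import urlparse, urlencode, parse_qsl, urlunparse

# Common click-ID tracking parameters (illustrative, not exhaustive).
TRACKING_PARAMS = {"fbclid", "gclid", "dclid"}
# Parameter-name prefixes that mark campaign tags.
TRACKING_PREFIXES = ("utm_",)

def strip_tracking(url: str) -> str:
    """Remove known tracking parameters from a URL, keeping the rest."""
    parts = urlparse(url)
    kept = [
        (k, v)
        for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if k not in TRACKING_PARAMS and not k.startswith(TRACKING_PREFIXES)
    ]
    return urlunparse(parts._replace(query=urlencode(kept)))
```

This is essentially what browser extensions that clean URLs do, driven by a maintained blocklist instead of a hard-coded set.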
My guesstimate is that the number of URLs that are shared on Facebook AND that already have a completely orthogonal "fbclid" parameter is infinitesimal.
Maybe among the URLs shared on Facebook there are a few whose servers only accept a fixed set of parameters, changing their behaviour when additional unused parameters are appended to the query string, but I imagine that the number of such cases is so low it's not even worth considering.
What exactly is Facebook breaking, in your opinion?
Would Facebook also break things if they were instead making an async request to the destination and appending a custom header to it, something like "X-Coming-From-Facebook"?
I understand being upset about the tracking aspect, but attaching query params to a link isn't breaking anything. Of all the ways Facebook could have implemented something like this, I actually prefer it this way. Query params are easy for me and other adblockers to strip off. Imagine if they were messing with request headers or something that was harder to notice or change.
An incredibly small number of sites might already be using `fbclid` internally, and an even smaller number won't be able to update their sites.
I am totally on board the don't-break-the-web train, but this just doesn't seem like a problem to me. Maybe once stats come out we'll see that it's a bigger issue, but... I kinda doubt it.
The author failed to do any research, instead going for the typical "FB is doing something secretive and cryptic" angle. Related links that explain this:
https://www.facebook.com/business/news/facebook-attribution-...
https://marketingland.com/facebook-attribution-now-available...
https://old.reddit.com/r/adops/comments/9pycuk/facebook_atti...
https://www.reddit.com/r/analytics/comments/9o52yw/parameter...
This HN thread is a perfect example of a news bubble. Googling "fbclid" returns the answer in the first result, but HN votes up an article that has no information and treats it as some secret tracking that FB has implemented. HN is excessively biased against any discussion of tracking/analytics on the internet. The community allows no room for true discussion, only blatantly biased opinions.
This article was posted days before any of the links you've given.
According to the metadata for the site, it was originally published on 2018-10-14 and last updated 2018-10-16.
Facebook's own article about the feature came out 5 days after this article was published. So, at the time, Facebook _was_ being secretive about it. Aside from that one line, the entire article reads more like "this is new, I wonder what it does".
Lastly, when I googled "fbclid" the top 3 articles are completely unrelated to Facebook (but, then, I'm not in marketing, so this doesn't surprise me) and the fourth is this very article.
Not really seeing the comments here as jumping to the conclusion that this is a totally secretive and nefarious practice. I think most people here are used to links being instrumented with tracking of this sort.
Your first two links don't contain any technical information about the "fbclid" parameter. They can be read to understand why Facebook does it, but not how. I see how it is useful to get this info when I'm paying Facebook for views. But that's something different from understanding how it works. So articles like this are needed to piece together what is going on.
Now the interesting question will be whether "fbclid" can be tied to individuals. And I couldn't readily find this info in the links you posted. Maybe I'm bad at reading?
Could someone explain or give a reliable article that explains this well?
When Google crawls your site, it doesn't know the difference between two urls with the same path, but with different GET params.
There's no way for them to know whether or not the extra params on the URL change the resulting page.
(i.e. example.com/index.php?post_id=1 and example.com/index.php?comment_id=1 could be very different pages, or they could be the same; you don't know).
So in comes the canonical url!
This tells Google the proper url required for a specific page.
That way if Google gets to a page using two different urls, it can tell that they are the same page.
You can list it by adding a tag to your HTML head.
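For example, for the post page above, the tag in the head would look like this (the URL is illustrative):

```html
<!-- In the <head> of example.com/index.php?post_id=1&fbclid=... -->
<link rel="canonical" href="https://example.com/index.php?post_id=1">
```

Any appended tracking parameters then collapse back to the one canonical URL in Google's index.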
You can even do fancy things like rewrite urls entirely (i.e. if the crawler hits example.com/?category_id=1&item_id=2, you can correct the ugly url by listing the canonical url as example.com/category/1/item/2)
https://support.google.com/webmasters/answer/139066?hl=en
Google is pretty good at automatically determining which query parameters actually modify the response content (pagenb=, id=, q=, etc.) and which do not really (sortby=, highlight=, utm_source=, gclid=), so that should not be a problem.