Correct me if I'm wrong, but I think publishers are trying to have their cake and eat it too. They want their content to be discoverable in search engines, and they also want readers to pay for it. Given how search engines currently work, this is a self-contradictory model.
I imagine search engines could offer publishers an API to submit content for indexing without having to publish it openly. Google and others could even charge publishers for that, and publishers could then ask readers for subscriptions.
I absolutely support such programs. Like adblockers and DRM, they show that the mechanisms used to generate money simply do not work. It is technically impossible to expose content to a (web crawler) robot but not to a human (an inverse CAPTCHA), and it is technically impossible to control the pixels on a device the user freely controls.
Publishers, if you want people to pay for your content, make honest paid subscriptions and accept that you vanish from the openly accessible web.
What I find most interesting are Google's very clear but often ignored cloaking guidelines:
"Cloaking refers to the practice of presenting different content or URLs to human users and search engines. Cloaking is considered a violation of Google’s Webmaster Guidelines because it provides our users with different results than they expected.
Some examples of cloaking include:
...
- Inserting text or keywords into a page only when the User-agent requesting the page is a search engine, not a human visitor
" [0]
Google happily shows content from LinkedIn, FB, Pinterest and news sites. But when I, Joe User, go to the page, I see nothing but a login/register/pay-now form. How is this not a violation of the cloaking guidelines? Clearly Google is getting different content than I am!
(Presumably this is how the article's extension works... by masquerading as Googlebot -- again proving that these sites are serving up different content.)
[0] https://support.google.com/webmasters/answer/66355?hl=en
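The masquerade described above is easy to demonstrate. A minimal sketch (the article URL is hypothetical): build the same request twice, once with a normal browser User-Agent and once claiming to be Googlebot, and compare what the site sends back.

```python
import urllib.request

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")
BROWSER_UA = ("Mozilla/5.0 (X11; Linux x86_64; rv:109.0) "
              "Gecko/20100101 Firefox/115.0")

def build_request(url: str, user_agent: str) -> urllib.request.Request:
    """Build a GET request that presents the given User-Agent."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

def fetch_as(url: str, user_agent: str) -> bytes:
    """Fetch `url` while presenting `user_agent` (needs network access)."""
    with urllib.request.urlopen(build_request(url, user_agent)) as resp:
        return resp.read()

# Comparing the two responses shows whether the site cloaks, i.e. serves
# the crawler a different page than the human reader (hypothetical URL):
# crawler_view = fetch_as("https://news.example.com/article", GOOGLEBOT_UA)
# reader_view  = fetch_as("https://news.example.com/article", BROWSER_UA)
```

This only checks the User-Agent signal; real sites may additionally verify crawler IP ranges, in which case the spoof fails.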
It’s hardly having their cake and eating it too; it’s equivalent to wanting placement at a news vendor. Your API idea isn’t a bad one, but I hardly think these news organizations are at fault for wanting to be discoverable while also wanting money in exchange for their hard work.
I find it much easier to tell Firefox to block cookies and localStorage for such websites. That usually works when they have a certain free allowance, and you can do it from the address bar quite conveniently.
This is a double-edged sword. We complain about too many adverts on sites, and we complain about paywalls. Without ads or paywalls, how do big sites pay for themselves? If you want quality reporting, then someone has to pay those reporters a fair wage for their skill set.
As much as I'd like 'information to be free', we do not yet live in a 'Star Trek' world where money no longer has any meaning, and where people produce 'stuff' purely for the benefit of themselves and others.
Until we reach that utopia (or dystopia, depending on your view), you can't have your cake and eat it: it's Ads or Paywalls, pick one.
I agree completely that high-quality journalism requires an appropriate wage, and I detest advertising. However, playing devil's advocate, the flip side of paywalls is that you essentially remove/hide quality journalism from the argument against the poor-quality, free-to-publish 'news' that is often referred to as 'fake news'. Just a thought.
When I hit a paywall, I walk away or pay up, because that's the ethical thing to do. I do expect no tracking, though, when I pay. Reasonable ads (fast, first-party served) would be OK; tracking is really not.
But... bots (google, etc) are allowed to get through.
I understand the reasons, but then, is it really a paywall?
To be honest, I won't bother with cookies and other tricks: logged in, allowed, logged out, walk away.
In the end, I agree: don't bypass paywalls; the whole adblocking argument becomes invalid if we do. Some publishers are trying to find a new source of income, and bypassing invalidates those efforts.
(EDIT: treating bots differently is wrong on so many levels. I changed the User Agent of my miniflux RSS reader to "miniflux-legacy (Googlebot for Tumblr)" from "Miniflux (https://miniflux.net)" because otherwise Tumblr shows GDPR cookie consent interstitial for RSS feeds. There really, really shouldn't be a separate version for bots.)
The existing model is to treat the Internet as a magazine rack. It doesn't work that way.
A user might read one article from the WSJ a month. Obviously they won't pay 5 USD for that; they'll either attempt to get around the paywall or not bother.
Giving users a few free articles means that they'll just rotate around sites to get what they want. You need to charge a small amount from hit 1.
Notwithstanding that, it's not hard to build a paywall that actually functions: just don't send the content unless the reader has paid. This addon relies entirely on the fact that content which has not been paid for gets sent anyway.
Realistically though, the answer is that paid journalism disappears, or the Internet as we know it disappears. Increasingly lately it's looking like both will happen.
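The "just don't send the content unless you've paid" point above is worth making concrete: a server-side paywall is a one-line branch, and then there is nothing in the markup for any client-side extension to reveal. A minimal sketch with hypothetical article/session shapes:

```python
# A paywall that actually functions: unpaid visitors never receive the
# article body, so no amount of client-side trickery can expose it.
def render_article(article: dict, has_paid: bool) -> str:
    """Return the HTML fragment this visitor is allowed to see."""
    if has_paid:
        return f"<p>{article['teaser']}</p><div>{article['body']}</div>"
    # Unpaid visitors get the teaser plus an offer, and nothing else.
    return f"<p>{article['teaser']}</p><p>Subscribe to keep reading.</p>"

article = {"teaser": "Big news today.", "body": "The full story..."}
print(render_article(article, has_paid=False))  # no article body in the markup
```

The trade-off, as the surrounding comments note, is that content withheld this way is also invisible to search crawlers unless they are treated differently.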
It's Ads or Paywalls, but it's not Paywalls or Bypass: you can choose both. Since micropayments aren't here yet and most of us can't afford subscriptions to every single site, one approach is to choose a few you can afford and bypass the paywalls of the rest.
I know this is unacceptable to those who have a moral opposition to accessing content without accepting the rules, but to those who are just concerned about funding journalism, this approach is essentially equivalent to not bypassing paywalls.
Also, there's a problem with paywalls, which is the enhanced tracking: by subscribing, you're giving them an excellent profile to attach to their analytics. Some people might therefore prefer to use a bypasser even if they are subscribed.
I don't understand the point of bypassing. The creators need to make money. If you can afford it, subscribe; if not, use another website. It's that simple.
I subscribe to a couple of sites. I won't be subscribing for an extra monthly fee I can't afford just to read one or two articles linked on HN. Therefore I have two options: don't read the article, or bypass the paywall.
In both options, the outcome for the site is exactly the same. So, not being a masochist, why shouldn't I choose the outcome that benefits me without harming anyone else?
Technically, the extension is clearing cookies for the site and setting the "Referer" header to either google.com or facebook.com. Is it illegal to do that?
Source: https://github.com/iamadamdev/bypass-paywalls-firefox/blob/m...
Cookie clearing is almost certainly not a legal issue; there'd have to be an explicit legal agreement between the setter and the keeper for it to be an issue in the first place.
The "Referer" header is an interesting question. Hypothetically, if there were some kind of remuneration for the site (e.g.: "I grant access to my site to all of your users and, in exchange, for every referred visitor you pay me 0.01 cents"), then spoofing the Referer would incur a financial cost to Google/Facebook. The cost incurred by an individual user would be negligible, but a case could be made against the extension publisher for the aggregated costs.
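Mechanically, the two steps being discussed are tiny. A rough stdlib sketch (the URL is hypothetical; a real extension does this inside the browser and additionally deletes the site's stored cookies):

```python
import urllib.request

def build_bypass_request(url: str,
                         referer: str = "https://www.google.com/") -> urllib.request.Request:
    """Build a GET request that claims to arrive from a search engine.

    Sites offering 'first click free' typically check two signals:
    a search/social Referer, and the absence of prior-visit cookies.
    This request carries the spoofed Referer and no Cookie header at all.
    """
    return urllib.request.Request(url, headers={"Referer": referer})

req = build_bypass_request("https://news.example.com/article")
# req now presents a Google Referer and attaches no cookies.
```

Whether sending such a request is *illegal* is exactly the open question in the comment above; the sketch only shows how little machinery the spoof requires.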
Blocking ads being about privacy always seemed like hypocrisy, and extensions like these only prove it.
Of course, the ad revenue model for the Internet only happened because people basically want free labor, being unwilling to pay for the content they consume. Screw the publishers; the world will survive.
It's sad, because what we'll get is legislation and/or more DRM.
This extension proves nothing; it's perfectly sensible that someone worried about the privacy of their reading habits doesn't want to make things worse by tying a payment ID to their profile. Subscriptions are, in that sense, worse than ads.
It depends on the website. One that I managed to "hack" is a newspaper in my country (the biggest one, actually) that only hides the article text behind a CSS class; you just remove that class from the HTML through the dev tools and have access to the whole thing.
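The markup-level version of that trick can be illustrated with Python's stdlib HTML parser (the class name `paywall-hidden` is made up): because the text is present in the HTML and only hidden by CSS, any parser that ignores styling recovers it directly.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect all text nodes, ignoring CSS entirely: a class that merely
    hides an element does not remove its text from the markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

# Hypothetical paywalled markup: the body is "hidden" only by a class.
html_doc = ('<p>Teaser paragraph.</p>'
            '<div class="paywall-hidden"><p>The full article text.</p></div>')
parser = TextExtractor()
parser.feed(html_doc)
print(" ".join(parser.chunks))  # → Teaser paragraph. The full article text.
```

This is the weakest possible "access control": the server sends everything and asks the client politely not to display it.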
I hope in most cases it is more complicated than that
Probably by using a trick to get there via Facebook; there was a trick like that somewhere. Or by imitating a search engine, because the news outlet also wants a high Google ranking. Or by using a copy from archive.is.
In Germany, this is covered by § 95a UrhG [0], which prohibits bypassing safeguard measures to gain access to copyrighted material. "Anti-anti-adblocks", i.e. adblock filters that bypass adblock-detection popups, were already declared illegal in the BILD case [1].
[0] https://dejure.org/gesetze/UrhG/95a.html
[1] https://www.wbs-law.de/it-recht/verbreitung-einer-anleitung-...
The effectiveness of the access control is usually a factor.
If there is just a banner hovering over the actual text, and the extension merely removes that banner, then one could question whether there was even an access control in the first place.
As an extreme example, a Finnish court ruled that CSS (the Content Scramble System used by DVDs a long time ago) was ineffective.
https://www.turre.com/finnish-court-rules-css-protection-use...