I used 12ft for some time and got tired of loading bloated news websites twice. Archive.today is a good alternative but means making copies of crap articles somewhere else, while I just want to read them once. Ultimately I recommend txtify.it, a service that uses Readability, as a more sensible solution.
Why not use the reader view feature of Firefox and Chrome? With Firefox I even like the fact that you can use text-to-speech, so very long stories can be played in the background.
Honestly I find that the vast majority of these services don't work. And then when they do work they don't work for long and we're just in a cat and mouse game.
Well, this is ironic. Well-intentioned developer creates a site to bypass paywalls. Site is a popular success and begins to cost too much money for the developer to maintain. Developer comes up with a solution, which is to introduce an optional paywall.
I'm not just poking fun... I actually sympathize because I'm in a similar situation with one of my projects. I've had to reconsider being a 100% ideals-based person and actually set up methods for people to give me money.
I've always appreciated sites offering free content, but now I'm in the unique situation where I'm the one publishing content. It takes a lot of time and energy to put together and my other sources of income can only cover the bills for so long. The harsh reality is that free services require money to operate.
I made a website for piracy when I was a kid and did the exact same thing; was definitely ironic for people to pay for something to avoid paying.
But I think it always has been and will continue to be more of a UX problem than a "wanting stuff for completely free" problem.
Case in point, a couple hours ago I tried to stream the Super Bowl from the official NFL website, but failed multiple times to finish their checkout process (which was only $0.99, well below what I would've been willing to pay), so instead I... went elsewhere.
I have one almost legitimate use for 12ft.io: sharing links for a paid-subscription news site where I actually pay the annual fee and the site has its own way of letting users share articles with friends and family.
But the built-in process is onerous: one has to sign up, and they nag you to subscribe, which is a long journey just to read one article.
So I hope 12ft guy can find a niche like that to sustain a legitimate business.
This person is asking for money in order to help provide a service… and that service is “stop other people from asking for money to provide a service.” Hard not to feel a little irony.
There are approximately three outcomes here.
1. Google shuts this guy down, some way or another
2. Paywalled sites sue this into oblivion
Or worst of all,
3. Google stops being able to usefully surface decent news content.
Paywalls are annoying, but this is not a long-term solution.
How is "Google isn't allowed to scrape sites to power their own site for free" the worst option? Google can write a few checks to access the data. They have enough money.
I kind of laughed at that, but it seems like asking for trouble. Bypassing paywalls on a slightly underground basis is something the publications might tolerate or live with like ad blockers, but doing it for profit is painting a target on your back.
Also, I wonder what kind of hosting 12ft is using, where bandwidth costs so much, unless they are pushing hundreds of TB per month or more (maybe they are). These days there is tons of super-low-cost bandwidth if you bypass the big providers.
The “Why?” section of the home page is rather disingenuous, since the Economist is not “SEO optimized garbage”, nor does it want you to “sign up for some newsletter”. It’s just an excellent newspaper that needs to pay the staff that writes all those articles. Just like 12ft.io needs to pay its hosting provider.
I'd be fine with that if they didn't let search engines crawl their content. If they want to be on the web, they have to play by the rules of the web. Instead what they want is to reap the benefits of the web while refusing to participate in the open web.
I'd be happy to pay The Economist but I won't buy an expensive subscription because I don't read enough of their articles to make it worthwhile. I'd like a browser based wallet which I could use to pay per article with one click.
Then search engine crawlers should get paywalled too.
The motivation is correct. You run a search on google and get mostly paywalled content. I'm fine with news sites requiring subscriptions to view their articles but they shouldn't also get the benefit of being listed at the top of search results for key terms.
Alternatively, the search list should show if content is paywalled or give you search options to remove paywalled content.
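There is actually an existing hook for this: Google asks publishers of paywalled content to label it with schema.org's `isAccessibleForFree` property, which a search UI could surface as a filter. A minimal sketch of detecting that flag in a page's JSON-LD; the sample markup below is illustrative, not taken from any real site:

```python
import json

# Sample JSON-LD of the kind Google's paywalled-content guidance describes.
# This markup is hypothetical, not copied from any publisher.
sample_jsonld = """{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example paywalled story",
  "isAccessibleForFree": false
}"""

def is_paywalled(jsonld_text: str) -> bool:
    """True if the page's JSON-LD declares the content non-free."""
    data = json.loads(jsonld_text)
    # A missing flag is treated as "free"; schema.org also permits the
    # string forms "False"/"false" for this property.
    return data.get("isAccessibleForFree", True) in (False, "False", "false")
```

A crawler (or browser extension) that already parses structured data could expose this as a "hide paywalled results" toggle with no extra requests.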
1. I pay for The Economist, and I want to share articles with my family & friends without them having to jump through the login hoop; until quite recently there wasn't an intended method to do this[1]
2. My partner is an analyst; her work is often quoted and republished, in part or in full, behind paywalls. Going through the correct channels to read what was published can take days to weeks, and she needs to know pronto!
Edit: Clearly, because I'm paying for The Economist, I believe they should be remunerated for their journalism, but a hard policy of "no access unless you've verifiably paid" would be a worse status quo.
> The idea is pretty simple, news sites want Google to index their content so it shows up in search results. So they don't show a paywall to the Google crawler. We benefit from this because the Google crawler will cache a copy of the site every time it crawls it.
> All we do is show you that cached, unpaywalled version of the page.
This seems like something that could be accomplished entirely client-side, without incurring bandwidth costs to the server.
Or am I missing something about how this service works?
You can't do it client-side. Well-implemented paywalls clip the content server-side: if you're Google, you get the full HTML; if you're not, you don't.
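A minimal sketch of the kind of server-side clipping described above; all names are hypothetical, not any real publisher's implementation:

```python
# Hypothetical "hard" paywall: gate content on the requesting user agent.
# Known crawlers get the full article (so the cache gets it too); everyone
# else gets a clipped teaser that no client-side trick can un-clip.

CRAWLER_TOKENS = ("googlebot", "bingbot")

def render_article(user_agent: str, full_text: str, teaser_len: int = 200) -> str:
    """Return the full article for known crawlers, a clipped teaser otherwise."""
    ua = user_agent.lower()
    if any(token in ua for token in CRAWLER_TOKENS):
        return full_text  # crawler (and hence the crawler's cache) sees everything
    return full_text[:teaser_len] + "... [Subscribe to continue reading]"

article = "A very long article body. " * 20
crawler_view = render_article("Mozilla/5.0 (compatible; Googlebot/2.1)", article)
reader_view = render_article("Mozilla/5.0 (Windows NT 10.0)", article)
```

Since the clipped bytes never leave the server for ordinary visitors, the only places to get the full text are the crawler's cache or a request that looks like a crawler, which is exactly what these proxy services rely on.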
How is 12ft Ladder any different from this GitHub repo that does the same thing? It's one of the more popular repos as well, at 22k stars.
Genuinely curious, as I've just been using [1] and have had no issues bypassing paywalls. If 12ft has access to more sites, or something, that would be a better use case.
The owner of the repo does not even want donations, just a star; otherwise, I'd contribute!
12ft (and Outline) make the request as a new visitor with a presumably fresh-ish IP address, which is sometimes required when the publication only serves the full article to completely new visitors (i.e. the paywall can't be bypassed by clearing cookies or by removing some paywall HTML/JS).
It's not any different. It actually can access fewer sites, since some sites have disabled it, presumably by paying the owner? The only advantage of 12ft.io is that you have a shareable link for friends, and that it can be used on mobile.
12ft.io does a great job of bypassing paywalls, which is in demand because people want to avoid paying for the content they want to see. What's more, 12ft is basically an open CGI proxy server. The chances that it will be used to death without any efficient way to monetize it seem high.
FWIW I do think it's a great service and is very helpful when you only need to access one article from a site you'll never need/want a subscription for. But I can definitely see there being people taking advantage of it when they want to avoid subscribing to whatever service.
So basically he makes it so people can steal articles... and then he's realizing that yeah, running a website is expensive and that's why companies have paywalls in the first place.
I honestly won't mind paying publishers to read the articles I want to read. However, I'm not interested in reading 90% of the articles from a publication/website. I hope, there will be an easy and seamless way to pay-as-you-read per article or a day/week/month pass.
It is not that I don't know how to "hack" and find ways to read paywalled articles -- almost every paywalled website is un-paywallable. It is that it is very irritating, and taxes the brain.
I pay for quite a few of them, hoping to support the writers, and do not worry about finding ways around. In the end, I do NOT read more than 2-3 articles a week from them. Most of the time, months go by without my even stumbling on a single article from such publications.
Generally, for recreational web use, I never use Javascript, or even CSS. I prefer to use a text-only browser.
There are websites that apparently have "paywalls", but without Javascript, I do not even know these "paywalls" exist. I read every article on a website without any indication of any limitations.
The NY Times and The Economist are examples. What people refer to as "paywalls" are simply Javascript annoyances. One has to run the Javascript to experience the annoyance.
Perhaps the "privilege" of running the website's Javascript is a "benefit" of subscription. However I would not run the Javascript regardless of whether I was subscribed or not. That choice has nothing to do with the idea of "paywall". I choose not to run Javascript on any website. This improves the web experience for me in too many ways to count.
News publications could approach subscription as (a) access versus (b) no access, e.g., password protection. Yet some publications approach subscription instead as (a) access without Javascript annoyances versus (b) access with Javascript annoyances. Of course (b) only applies if one chooses to run the Javascript.
Every web user has the option not to run other people's code, i.e., Javascript.
The access model that scientific journals use seems to work well enough. Access is granted to an IP address. If the subscriber is on a different IP address, she can get access through a password-protected proxy.
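The IP-based model above fits in a few lines; the subscriber ranges here are reserved documentation prefixes, purely hypothetical:

```python
import ipaddress

# Hypothetical subscriber ranges (think: a university campus network).
SUBSCRIBER_NETWORKS = [
    ipaddress.ip_network(n) for n in ("192.0.2.0/24", "198.51.100.0/24")
]

def has_institutional_access(client_ip: str) -> bool:
    """True if the request originates from a subscribed network."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in SUBSCRIBER_NETWORKS)
```

An off-network subscriber would instead authenticate to a password-protected proxy (EZproxy-style) whose egress IP is on the allowlist, which is exactly the fallback described above.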
> How does it work?
> The idea is pretty simple, news sites want Google to index their content so it shows up in search results. So they don't show a paywall to the Google crawler. We benefit from this because the Google crawler will cache a copy of the site every time it crawls it.
> All we do is show you that cached, unpaywalled version of the page.
— https://12ft.io/
They're just showing you the Google cache? Like... what you can get by putting `cache:` in front of a URL in Chrome? (Or using an extension in Firefox?)
So, this site gives you access to the same thing by putting `12ft.io/` in front of the URL instead of, say, `cache:`? Is there something more to it? That... seems like an interesting thing to ask people to pay for.
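For reference, the `cache:` operator amounted to a plain URL on Google's public cache endpoint, so "prefixing" really was all there was to it. A sketch of building that URL; the endpoint name is an assumption from memory, not taken from the thread:

```python
from urllib.parse import quote

def google_cache_url(url: str) -> str:
    # Equivalent of typing `cache:<url>` into Google: request the page
    # from the cache endpoint instead of the origin server. Keep ":" and
    # "/" unescaped so the target URL stays readable.
    return ("https://webcache.googleusercontent.com/search?q=cache:"
            + quote(url, safe=":/"))

print(google_cache_url("https://www.example.com/article"))
```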
Bookmarklet: javascript:q=location.hostname+location.pathname;location.href="https://txtify.it/"+q;
Similarly Outline: https://www.wsj.com/articles/why-financing-the-multi-trillio...
But the current top comment's txtify did: https://txtify.it/https://www.wsj.com/articles/why-financing... (though this doesn't have pictures)
I'm happy to pay a few cents to read an article. I'm absolutely NOT happy to be forced to sign up for 10 different subscriptions.
[1] https://web.archive.org/web/*/https://myaccount.economist.co...
[1] https://github.com/iamadamdev/bypass-paywalls-chrome
Btw, you should switch to the better-maintained version of the Bypass Paywalls extension: https://gitlab.com/magnolia1234/bypass-paywalls-chrome-clean
Have you considered using hosts that come with more free / unlimited bandwidth?
I wonder if they're accessing pages from Google Compute Engine in an attempt to appear as a legitimate Google crawler?
If they're just loading the cached Google pages like anyone else, I don't understand why it's hitting their servers at all.
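If they are impersonating the crawler, sites can detect it: Google documents verifying a "real" Googlebot by reverse-DNS'ing the client IP to a `googlebot.com` or `google.com` hostname and then forward-resolving it back to the same IP. A sketch of the hostname half of that check; the DNS calls are left as comments since they need network access:

```python
# Hostname test from Google's documented Googlebot verification procedure.
def is_google_hostname(hostname: str) -> bool:
    """True if a reverse-DNS hostname belongs to Google's crawler domains."""
    host = hostname.rstrip(".").lower()
    return host == "googlebot.com" or host.endswith((".googlebot.com", ".google.com"))

# Full check (sketch):
#   hostname = socket.gethostbyaddr(client_ip)[0]                  # reverse DNS
#   verified = is_google_hostname(hostname) and \
#       client_ip in socket.gethostbyname_ex(hostname)[2]          # forward-confirm
```

A mere Googlebot User-Agent string, or a GCE source IP, would fail this check, so any site doing the full verification can still serve such a proxy the paywalled version.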