Again, it's just insane to me that we don't even have a meaningful discussion of:
"Hey, wait, literally everyone could have the entire Library of Alexandria in their house for a couple hundred bucks per person. Like, all the knowledge ever. Maybe that should be considered the good default state of things.
At least one in every town that everyone could use, for free, forever, without restriction to ANY of the knowledge anyone desires."
So, if I get it right: first there was Libgen, which is mirrorable. Then Z-Library copied Libgen and added some more books, without making them mirrorable. The goal is to make these new, non-mirrorable books mirrorable (i.e. to "preserve" them).
So, why not just re-upload them to Libgen, then? I guess somebody will do that now anyway, but you could easily have done it in the first place, without making your own mirror, which is not a mirror of Libgen. Just upload them to Libgen and mirror Libgen.
Excellent. I hope more and more people start to see how absurd and evil the concept of "intellectual property" is. It should be totally rejected, and any form of keeping useful information to yourself should be shunned and treated as taboo. In today's world, many have been programmed to believe that the world could not exist without such immoral restrictions, which is horrible.
I can't imagine getting rid of it completely would have good effects, as it would make any large-scale production impossible. You might still get a few blog posts, but getting books produced will be tricky, and something big like a movie might be outright impossible. This is doubly true in the modern digital world, where everything can be copied in a fraction of a second and where the piracy sites, not the authors, are what bubbles to the top of the search results.
It could also result in far more draconian DRM, as that would be the only way left to protect your work.
Now, drastically lowering the term of copyright might be well worth it; something in the realm of 20 years should be enough. Copyright needs to get back to a point where things you consumed in your lifetime make it into the public domain within your lifetime.
I've just spent two years writing something with nothing else like it. It was technically pretty difficult and needed a lot of background knowledge.
Should I be disallowed to commercialise it?
I partly get where you stand, but if I were in the society you seem to endorse, my first question would be: other than for the love of doing it, why sink so much effort into a thing only to get nothing back? It is almost the opposite of a meritocracy.
Creating nontrivial intellectual property usually takes a lot of work, work that has to be paid for, or it literally could not be done: the great people who create those works could attempt maybe one such work, and in most modern cases they would not get close to finishing it before they ran out of money for rent and food.
The system around intellectual property has some issues, but some form of protection/ownership needs to be there.
If you had your wish and the concept of IP were shunned and treated as taboo, you would quickly live in a world with a vastly diminished amount and quality of art, science and technology.
There should be a fine line in intellectual property rights. I see where you are coming from: quite often intellectual property is used as a moat to protect insane revenues and, as a consequence, delays or slows down our progress as humanity.
But it is also used to protect unique creators' revenue and to encourage them to create more.
If you ask where the fine line should be, I have no immediate answer, but abolishing intellectual property rights, just like enforcing them at all costs, doesn't seem to be the optimal course of action to me.
Such comments remind me of the provocateur methods used by police to suppress any legitimate critique. It works like this:
1. there is some legitimate issue
2. people protest (peacefully)
3. a provocateur does something over the top (violence, absurd statements like "defund the police")
4. legitimate protesters are discredited because of 3.
If you want to drastically reduce the number of new books, songs and other content, then sure.
Otherwise, I am having a really hard time understanding how you can suggest that I don't own the book I spent a *decade* writing. It is just as mine as the car you drive is yours.
To tighten regulations around intellectual property to make sure that it is not abused - sure. To ban? Obviously never.
Most people writing books, e.g. those published at https://www.oreilly.com/, would not do so if they could not monetize them - which would be even harder if there were no legal protection of the work they did. I don't like your idea at all.
It's funny that often the only people who think this are programmers; everyone else who hopes to make a living doesn't. At the moment, not even NYT best sellers make a livable wage, and now with DALL-E, artists won't either. COVID basically killed a lot of musicians' income - DJs and so on. It's probably also why media itself has reached such a mediocre state: no authors, no novels, no adaptations. We are in a future in which much of the current media is a sea of mediocrity. There's lots of content, sure, but barely any that's worth a damn.
Interesting to call it evil, when your whole reasoning is driven by pure greed. I guess you don't see greed as something evil? Or do you think there is a human right to consume? That everyone has the right to experience and own, for free, everything that others have created and worked hard for?
Anyway, "intellectual property" has proven to be a driver of quality, as the earned money gives liberty and time for the creators. I don't see how this is a bad thing. Sure, there are warts in the system and we should get rid of them, but not by removing the whole good side.
I think making books, and knowledge in general, available to everybody is an essential public service. At a time when disinformation is so widespread, actual data and understanding are essential weapons against it.
In that sense, these piracy sites are acting like global public libraries open to everybody with an internet connection.
At the same time, I feel the authors, researchers, editors, and other support staff that gift the world with knowledge should be rewarded for their effort.
It'd be great if there were an honor system that enabled readers around the world to pay them some amount to show gratitude.
The current system is of two extremes -- either first pay the price set by the publisher to even browse a book (and that price is ridiculously high in underdeveloped countries), or get the full book without paying anything.
There should be a spectrum of rental and gratitude amounts in between. The publishers themselves can together set up such an online library to make it all legal. Not only will they help humanity, but they'll also get some of the revenue they're currently missing out on. A balance seems to have been struck in the music business with most of it being legal and accessible nowadays. They should do it for books too.
Also, the information about nuclear, chemical and bio weapons should be accessible to everyone. Preferably as DIY recipes, that you can follow at home.
If you spend time learning, time putting together your knowledge, and time sharing it, it is very sensible to expect some economic return if you choose to. Alternatively, you can open-source it. It's the author's choice.
But we live in an infant society, with many grown-ups acting like spoilt children, saying “I want that fancy FAANG job, I want to be wealthy, and I expect to get there copy-pasting other people's knowledge and infringing IP, but if someone else begs to differ I start whining”.
I see things like this, and I wonder why the following software doesn't exist:
I want a piece of software to which I can add a collection of files, say multiple TB. The software will then behave a bit like a BitTorrent tracker, and know which peer has which files. A peer joining this swarm will be able to say "I want to donate X GB of space", and the tracker would tell it "OK, then download and seed these files, which are the least seeded".
The peer would download those files from the rest of the swarm and make them available in turn. Then, a request layer on top of the swarm could be used to request a file from a peer which has it. Adding and removing files from the collection would also need to be supported.
Does anyone know if anything like this exists? If not, how easy would it be to make something like it out of BitTorrent? I might give it a go.
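A minimal sketch of the tracker's allocation step described above. The function name, the data shape, and the "rarest first, smallest first" packing heuristic are all my own assumptions for illustration, not an existing tool:

```python
# Toy allocator: given the swarm's current seeder counts, hand a new
# donor the least-seeded files that fit into the donated space.

def allocate(files, donated_gb):
    """files: dict of filename -> (size_gb, seeder_count).
    Returns the least-seeded files that fit within donated_gb."""
    # Rarest first; break ties by size so small rare files pack first.
    order = sorted(files.items(), key=lambda kv: (kv[1][1], kv[1][0]))
    picked, free = [], donated_gb
    for name, (size, _seeders) in order:
        if size <= free:
            picked.append(name)
            free -= size
    return picked

swarm = {
    "a.pdf": (1.0, 12),
    "b.pdf": (2.0, 1),   # rarest file, should be assigned first
    "c.pdf": (3.0, 3),
    "d.pdf": (1.5, 2),
}
print(allocate(swarm, 4.0))  # ['b.pdf', 'd.pdf']
```

A real tracker would also have to re-balance as seeder counts change, but the greedy "rarest first" pass captures the core of the idea.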
I've always thought this would be a great way for many people to share well-organized Plex libraries: many semi-overlapping libraries basically creating a virtual Netflix, with some way to stream in Plex from the whole library whether or not you actually have the content locally.
Freenet is built around similar ideas, combined with encryption and anonymization, which hopefully adds the benefit of you not being legally liable for distributing CSAM. The gist of Freenet is that you can operate a freesite or upload files, and the content is redundantly dispersed, in encrypted parts, among a number of other Freenet peers. They don't know what they're hosting and neither do you; the client just fills up the allotted space and uses some bandwidth, that's all.
https://en.wikipedia.org/wiki/Freenet
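A toy sketch of the convergent-encryption idea loosely modeled on Freenet's content-hash keys: each chunk is encrypted with a key derived from its own hash and stored under the hash of the ciphertext, so hosting peers hold opaque bytes. The XOR "cipher" here is purely illustrative, not real cryptography:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a stand-in keystream (illustration only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def store_chunk(store: dict, chunk: bytes):
    key = hashlib.sha256(chunk).digest()       # key derived from the content
    enc = bytes(a ^ b for a, b in zip(chunk, keystream(key, len(chunk))))
    addr = hashlib.sha256(enc).hexdigest()     # address = hash of ciphertext
    store[addr] = enc                          # the peer stores opaque bytes
    return addr, key                           # the manifest keeps (addr, key)

def fetch_chunk(store: dict, addr: str, key: bytes) -> bytes:
    enc = store[addr]
    return bytes(a ^ b for a, b in zip(enc, keystream(key, len(enc))))

peers = {}  # stand-in for the distributed store
addr, key = store_chunk(peers, b"some book data")
assert fetch_chunk(peers, addr, key) == b"some book data"
assert peers[addr] != b"some book data"  # hoster can't read what it holds
```

Only whoever holds the manifest of (address, key) pairs can reassemble the file; the peers filling their allotted space never learn the plaintext.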
Regarding how easy it would be to make something like this network, I'd wager it's pretty hard. There will be a lot of questions even while establishing the happy path: for example, how you manage updates, especially when you update the protocol and not just the software; how you effectively handle the volume of search requests; how you distribute the files; etc.
And then there's the abuse the network will inevitably get. How do you handle spammers, CSAM, malware, ISPs that throttle or block you, the legal risk you expose your clients to, etc.? Nice big can of worms. To begin opening it, I suggest reading through Wikipedia's "Peer-to-peer file sharing" article (https://en.wikipedia.org/wiki/Peer-to-peer_file_sharing), and especially the file-sharing sidebar on the right, which nicely captures the ideas that have been tried so far.
I think BitTorrent has all the pieces needed for a fully distributed version of your idea. My initial thought is that you could publish a magnet link that points to a mutable DHT item, which in turn points to a torrent that has a JSON file with some metadata and a list of infohashes the publisher cares about. The client could then scrape the "leaf" torrents from multiple lists to get the peer counts and use that for local prioritization of what to store. By reusing existing torrents you could then share resources with standard torrent clients that are unaware of your system.
The list idea could be extended to nested lists (stavros recommends Internet Archive) for discoverability and composition.
If you go with v2 or hybrid torrents from the beginning you could deduplicate and cross seed files from different collections.
The lists could also be modified to have torrents to exclude, possibly using some salt + rehash idea to make it hard to reverse into a list of e.g. CSAM you don't want to publish as is.
Feels like a neat project that could interoperate nicely with existing torrents.
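A small sketch of how a client might resolve such nested lists into one deduplicated set of infohashes. The JSON field names (`infohashes`, `lists`) are invented for illustration, not part of any BitTorrent spec:

```python
def collect_infohashes(lists, root, seen=None):
    """Depth-first walk over nested list documents, deduplicating
    infohashes and tolerating cycles between lists."""
    seen = set() if seen is None else seen
    if root in seen:          # already visited: break the cycle
        return set()
    seen.add(root)
    doc = lists[root]
    hashes = set(doc.get("infohashes", []))
    for child in doc.get("lists", []):
        hashes |= collect_infohashes(lists, child, seen)
    return hashes

# "books" includes its own torrents plus everything "archive" lists.
catalog = {
    "books":   {"infohashes": ["aa", "bb"], "lists": ["archive"]},
    "archive": {"infohashes": ["bb", "cc"], "lists": []},
}
print(sorted(collect_infohashes(catalog, "books")))  # ['aa', 'bb', 'cc']
```

In the real design the `lists[root]` lookup would be a DHT fetch of the mutable item plus a download of the metadata torrent, but the composition logic stays this simple.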
Donating space is only half of the equation. I think donating bandwidth is a more significant aspect, especially with ISPs like Comcast that provide very little upload bandwidth compared to download. You'd expect that uploads wouldn't impact download speeds, but that's not the case: a saturated upload link means ACK packets get delayed, which means connections are established much more slowly. So, it's not a feasible prospect unless the competition takes over.
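A back-of-envelope estimate of the overhead behind this: TCP sends roughly one ~40-byte ACK for every two ~1500-byte data segments, so a download consumes about 1.3% of its own rate in upstream ACK traffic. The numbers are ballpark assumptions; real stacks vary:

```python
def ack_upload_mbps(download_mbps, seg=1500, ack=40, segs_per_ack=2):
    # Upstream rate consumed by ACKs for a given download rate,
    # assuming one ack-byte packet per segs_per_ack data segments.
    return download_mbps * ack / (seg * segs_per_ack)

# A 300 Mbit/s download wants ~4 Mbit/s upstream just for ACKs; on a
# 10 Mbit/s uplink, seeding at full rate leaves little headroom.
print(round(ack_upload_mbps(300), 1))  # 4.0
```

Which is why a saturated uplink stalls downloads: the ACKs queue behind the seeding traffic.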
Something similar exists: iabackup [1][2]. It is designed to host an independent copy of (some of) the Internet Archive using git-annex. You tell it how much storage you want to donate, and git-annex fills your disk with data from the least-seeded files, IIRC. Its focus is on data backup, not data serving, though.
[1]: https://git-annex.branchable.com/design/iabackup/
[2]: https://wiki.archiveteam.org/index.php/INTERNETARCHIVE.BAK/g...
For your idea: once all the local storage everywhere is filled up with evenly distributed redundant copies, and then a new file is added, would peers arbitrarily choose other files to delete in order to make room for the new file?
Private trackers do this somewhat via incentives: files with fewer seeders earn you more bonus points. If you want to farm BP, you can sort files by seeders and download poorly seeded torrents.
I keep thinking this makes sense, but then I ponder the "local" impact: what would a user "donating space" be storing locally, and what might they risk? Where does the risk lie - with the uploader, the seeder, or the service provider?
I would love to see this pick up, but it feels as though it would devolve into one of the listed projects.
It "sounds" as though it should use some kind of distributed ledger technology. IPFS seems the most fit; maybe some next layer that offers just this service (IPFS being GDPR-compliant might mitigate some risk?).
I wonder if there are any search engines dedicated to indexing these kinds of libraries. I know there's a decent one just for Sci-Hub, but it would be awesome if I could do a Google-style search that returned the contents of books, magazines and journal articles instead of just websites.
To be fair, Z-Library doesn't charge unless you want to download more than 10 books per 24 hour period. That's per account and although they ask you not to open multiple accounts they don't seem to do anything to stop you.
It's really funny to think about how the advances of technology keep changing how we perceive books.
7TB even fits on a single commodity disk these days. And it's a lot less than the torrent of scientific papers that floated around some time ago (that was ~18TB, IIRC).
One of the projects on my secret TODO list is feeding Libgen into Elasticsearch to get cross-referenced full-text search. For now the hardware is prohibitively expensive, but time is on my side. I'm sure redundantly indexing a few tens of TB will become trivial before the end of the decade.
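The core of that cross-referenced search is just an inverted index (token to documents); before sizing Elasticsearch hardware, the idea can be sketched with the standard library. Class and method names here are made up for the sketch:

```python
from collections import defaultdict
import re

class TinyIndex:
    """Toy inverted index: maps each token to the set of docs containing it."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        # Lowercase alphanumeric tokenization; real engines also stem, etc.
        for tok in re.findall(r"[a-z0-9]+", text.lower()):
            self.postings[tok].add(doc_id)

    def search(self, query):
        """AND-query: docs containing every query token."""
        sets = [self.postings[t]
                for t in re.findall(r"[a-z0-9]+", query.lower())]
        return set.intersection(*sets) if sets else set()

idx = TinyIndex()
idx.add("book1", "Distributed systems and consensus")
idx.add("book2", "Consensus in biology")
print(idx.search("consensus systems"))  # {'book1'}
```

The expensive part at Libgen scale isn't the algorithm but holding tens of TB of postings, which is exactly what Elasticsearch shards across machines.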
Hm, from some site on the internet, these are the stats:
Enabled users: 5629
Active today: 986
Active this week: 2542
Active this month: 4228
Torrents: 566598
Total Size: 23.40 TiB
Retail Torrents: 421313
Creators: 401395
Seeders: 2421540
Leechers: 232
Snatches: 16523563
Transferred: 398.22 TiB
And those are only books. The Library of Alexandria is already here.
The only problem is that there are so many books and so little time :( - finding the time to read them might be a far bigger problem than book accessibility.
There is at least an eighth of these books that I would love to read, but I don't have the time to pull it off even with no social networking etc.; the amount is really huge. Maybe, as a startup, someone could index all the content, not only share it.
I would love to contribute by seeding the torrents. However, what happens when there are new books on Z-Library? Is this only a one-time mirror? How do we expand the collection after a certain amount of time?
thrdbndndn | 3 years ago
For example, Perfect Dark, Winny, and to a lesser extent Share (which is more similar to eDonkey/eMule).
PostOnce | 3 years ago
I believe in free access to education, but charging for these books they have no rights to is a whole other thing.
moritonal | 3 years ago
Books are naturally immutable and could be structured into sub-categories while enjoying the benefits of deduplication.
cookiengineer | 3 years ago
Somewhere around last year they had datacenters burn down due to a natural disaster, but I'm hoping they can recover. It's an amazing project.
[1] https://the-eye.eu/
AdmiralAsshat | 3 years ago
They couldn't even spring for a Let's Encrypt cert?