dvasdekis|2 years ago
The Usenet network itself is always online and highly resilient - most providers offer ~5000 days of binary retention, and endless retention on text - and great bandwidth to boot. If a user doesn't have Usenet, or the Usenet isn't at the 'current' timestamp, that's where the Tor/P2P layer could kick in. You would only need a single server (with a private key, trusting the public key in the main executable) that continuously archives new posts to Usenet to make it work.
mrusme|2 years ago
Your thought sounds interesting; however, I'm not sure I fully grasp the details, so bear with me. Generally speaking, though, integrating with the actual USENET these days poses a few hurdles, one of which is as plain as it gets:
Finding well-maintained libraries to do so, especially for more modern stacks (e.g. https://github.com/search?q=nntp+language%3AGo&type=reposito...). Depending on how exactly you intend to integrate with USENET servers, the situation looks even more meager with regard to UUCP libraries. And yes, of course you could bind existing C libraries and so on, but I'd still argue that's not the most straightforward developer experience -- unlike with more modern technologies that provide either language bindings or other means of integration (gRPC, websockets, etc.).
But apart from this, one key difference to keep in mind, especially in regard to resilience, is that with USENET, resilience depends on the number of active servers willing to offer unfiltered access to the content, meaning that the game of whac-a-mole is, in theory, slightly more predictable for an attacker or oppressor trying to limit access to the network. With projects like I2P, Tor, or IPFS, on the other hand, every new client that connects to the network can also act as a relay at the same time -- one that an attacker or oppressor would need to find and neutralize in order to successfully block the entire network.
We should also not forget that many USENET servers are paid infrastructure these days. For someone living in a developed country, this might not be an issue. However, being unable to pay for access simply because you don't have the resources, because you are unbanked, or because your government took the easy path of sanctioning financial transactions -- to providers of such services, or to specific payment providers in general -- in an effort to curb use of the network, makes USENET theoretically more prone to censorship than, for example, IPFS.
One area where this kind of government intervention is rampant is VPNs, which similarly rely on a legal entity providing the server side of the network. Some countries have either outlawed these types of paid services altogether or made the companies bend over, in an effort to limit freedom of access to information. In a theoretical scenario in which USENET regained traction and became a more mainstream service, it would be fairly easy for governments to sanction the legal entities that provide access to the network. And there would be little alternative: given the amount of data on USENET, it would be quite expensive for individuals to offer free, unfiltered USENET access to others. With IPFS or similar peer-to-peer services, on the other hand, there is nothing that could be sanctioned. The use of this type of software might be made illegal in general, but cracking down on it on an individual basis is significantly harder.
Besides, the account requirement and setup for USENET also makes it more complex for an end-user to get onto the network, compared to IPFS, where one can basically just download and run Kubo (and use a browser extension to access the local gateway). However, from what I understood, your idea would not require each user to have an individual USENET account, but rather rely on dedicated USENET servers that trust the client regardless of which user is using it -- which, thinking about it, might come with its own set of challenges.
I would argue that the resilience we experience with USENET was/is partially due to the lack of interest from many otherwise censor-trigger-happy parties, simply because, unlike Tor (https://www.youtube.com/watch?v=YlZZQYLIXe8) or the Signal messenger (https://www.aljazeera.com/news/2021/1/26/iran-blocks-signal-...), it's not a mainstream technology used by everyone and their dog.
To get back to the topic at hand, I would rather not implement USENET or any client-server-based system as a sort of backup for an otherwise P2P app (e.g. Superhighway84), as I tend to agree with what OP stated in a different comment, which is...
> The thing that frustrates me about free and open source software that requires servers is: most people don't have servers! And the prevalent model for using others' servers involves a terrible power / dependence relationship. One thing that drives me to build Quiet is that I want to see a world where free software comes "batteries included" with all of its promised freedoms and advantages, for the vast majority who do not have servers.
A software landscape in which end-user applications are not dependent on dedicated servers at all, and can instead directly communicate and exchange information with each other, is ideally how I, too, would envision the future. Hence, while I'm a fan and user of USENET, XMPP, IRC and so on, and I have the knowledge and can afford the /luxury/ of renting servers to host these kinds of things, I'm far from the average end-user. I believe the future should belong to truly peer-to-peer, decentralized technologies.
dvasdekis|2 years ago
RE the libraries: while it's true that I can't find anything made specifically for Usenet in Go, NNTP is a plain-text, line-based protocol that you can speak over a raw Telnet-style connection[0], and there are Telnet clients in Go[1] and Node[2]. It probably isn't simple, but I'm sure working with OrbitDB wasn't easy either!
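To illustrate how little machinery a line-based protocol like NNTP needs, here's a toy exchange using only Go's stdlib net/textproto. The in-process "server" on net.Pipe is a stand-in with made-up responses; against a real Usenet host you would use net.Dial("tcp", host+":119") instead:

```go
package main

import (
	"fmt"
	"net"
	"net/textproto"
)

// nntpHandshake runs a minimal greeting + GROUP exchange against a toy
// in-memory peer and returns both status lines the "server" sent.
func nntpHandshake() (greeting, reply string) {
	clientConn, serverConn := net.Pipe()

	// Toy server: send a greeting, then answer a single GROUP command.
	go func() {
		s := textproto.NewConn(serverConn)
		s.PrintfLine("200 toy NNTP server ready")
		s.ReadLine() // consume the client's "GROUP ..." command
		s.PrintfLine("211 1234 1 1234 comp.lang.go")
	}()

	c := textproto.NewConn(clientConn)
	greeting, _ = c.ReadLine()
	c.PrintfLine("GROUP comp.lang.go") // select a newsgroup
	reply, _ = c.ReadLine()            // "211 <count> <low> <high> <group>"
	return greeting, reply
}

func main() {
	g, r := nntpHandshake()
	fmt.Println("server said:", g)
	fmt.Println("group reply:", r)
}
```

The same PrintfLine/ReadLine pattern covers most of the NNTP command set, which is why a dedicated library, while nice, isn't strictly required.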
RE the resilience of content on Usenet: the vast majority of binaries are heavily encrypted and don't make sense to anyone without the key, despite being conveyed en masse between the world's ~10 full-scale Usenet backbones[3]. I'm proposing that the backend of a service making use of Usenet could be similar, with a single 'background server' on one trusted machine being enough to continuously push the history to Usenet. A regular user client could then search for the latest version of this history and quickly refresh its state from Usenet, regardless of the status of IPFS at the time.
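The "search for the latest version" step could be as simple as scanning group overview data (e.g. the subjects an NNTP XOVER/OVER command returns) for the background server's posts and taking the highest sequence number. A sketch, where the subject convention "app-history seq=N" is entirely hypothetical:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// latestSnapshot scans a list of article subjects for the background
// server's posts (hypothetical convention: "app-history seq=<n>") and
// returns the highest sequence number found. The client would then fetch
// that article and verify its signature against the baked-in public key.
func latestSnapshot(subjects []string) (int, bool) {
	const prefix = "app-history seq="
	var seqs []int
	for _, s := range subjects {
		if !strings.HasPrefix(s, prefix) {
			continue
		}
		if n, err := strconv.Atoi(strings.TrimPrefix(s, prefix)); err == nil {
			seqs = append(seqs, n)
		}
	}
	if len(seqs) == 0 {
		return 0, false
	}
	sort.Ints(seqs)
	return seqs[len(seqs)-1], true
}

func main() {
	subjects := []string{
		"Re: something unrelated",
		"app-history seq=17",
		"app-history seq=23",
		"app-history seq=9",
	}
	seq, ok := latestSnapshot(subjects)
	fmt.Println(seq, ok) // 23 true
}
```

Anyone can post to the group, so the sequence number alone isn't trustworthy; the signature check is what makes the highest *verified* snapshot the one to sync from.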
RE democratic access to technology: at least with Superhighway84, it was very expensive for me to actually run the software, as I have a small bandwidth allocation from my ISP and not much I can do about that in my area, and I ultimately had to delete it due to the IPFS node's ongoing transfers of 3GB/day. Quiet itself notes a limit of 30-100 individuals for its application. I'm proposing that using the one remaining federated multicast technology, with some modern encryption, might help with the issues around blasting data everywhere from a bandwidth-constrained environment. In Africa especially, there are ongoing problems with bandwidth and networks that we in the West forget about. Usenet, with its extremely lean network overhead, could be part of the answer.
I do agree with your vision of a future of truly peer-to-peer technologies, but for those of us who are bandwidth-constrained or otherwise limited in our access to those technologies, having a technology-agnostic application that just 'does magic' to do whatever it needs to do with your content is what's going to make a majority of users happy.
[0] https://www.itprotoday.com/windows-78/how-can-i-use-telnet-a...
[1] https://github.com/reiver/go-telnet
[2] https://www.npmjs.com/package/telnet-client
[3] https://svgshare.com/i/oti.svg