(no title)
ttgurney | 3 years ago
I agree that curl is pretty big and bloated. I would not call it a deficiency that Links et al. don't depend on it.
I mostly just was thinking that since I already have curl on my system, it'd be nice to have a browser that reuses that code. Especially since curl has upstream support for the much smaller BearSSL rather than depending on OpenSSL/LibreSSL.
1vuio0pswjnm7 | 3 years ago
I like the idea of BearSSL, but it has no support for TLS 1.3.
I am not a fan of TLS but alas it is unavoidable on today's www. Keeping up with TLS seems like a PITA for anyone maintaining an OpenSSL alternative or even a TLS-supported application.
This is why I pick stunnel and haproxy. These are applications that seem to place a high priority on staying current. Knock on wood. I am open to suggestions for better choices if they exist.
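To make the "TLS as a separate program" approach concrete, here is a minimal sketch of a client-mode stunnel configuration. The hostname and ports are illustrative, not from the comments above: stunnel accepts plaintext on a local port and forwards it over TLS, so any plain TCP client can reach an HTTPS origin without linking a TLS library.

```shell
# Hypothetical sketch: a client-mode stunnel config.
# example.com and the port numbers are placeholders.
cat > tls-client.conf <<'EOF'
; run in the foreground, no daemonizing
foreground = yes

[https-client]
client = yes
accept = 127.0.0.1:8080
connect = example.com:443
EOF

# Run it with:  stunnel tls-client.conf
# Then a plain TCP client speaks HTTPS by connecting to 127.0.0.1:8080.
```

The same split works with haproxy in TCP mode; the point is that the application never has to keep up with TLS itself.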
There are many TCP clients to choose from. Before TLS took over the www, it was more popular to write one's own netcat.
I have focused on writing helper applications to handle the generation of HTTP. Thus I can use any TCP client, including old ones that do not support TLS.
The "web browser" is really the antithesis of the idea underlying UNIX of small programs that do more or less only one thing. Browsers try to do _everything_.
This is not appealing to me. I try to split information retrieval from the www into individual tasks. For example,

1. look up the hostname's IP address (DNS);
2. open a TCP connection;
3. set up TLS, if needed;
4. generate and send the HTTP request; and
5. (a) read the HTML, or (b) extract data from it.
The cURL project's curl binary combines all these steps. It has a ridiculous number of options that just keeps growing. For me, step 5 really does not need to be combined with steps 1-4 in the same binary.

I am able to do more when the steps are separated because it gives me more flexibility. To me, such flexibility is one of the benefits of the "UNIX philosophy". No individual program needs to have too many options, as curl does, and programs can be used together in creative ways.

I see the presence of a large number of options in a program like curl as _limiting_, and as creating liabilities. If the author has not considered something a user "should" want to do, then the program cannot do it. Adding large numbers of options is also a way of catering to a certain type of user with whom I generally do not agree. It is a form of marketing.
For step 4, curl is overkill. It has always surprised me that UNIX has never included a small utility to generate HTTP. Thus, I wrote one.
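For illustration only (this is not the commenter's actual utility), step 4 can be sketched as a tiny shell function that writes a minimal HTTP/1.1 request to stdout, leaving transport to whatever TCP client you prefer:

```shell
# Hypothetical sketch of a "generate HTTP" helper; the function name
# mkreq and its arguments are made up for this example.
mkreq() {
    # $1 = Host header value, $2 = request path
    printf 'GET %s HTTP/1.1\r\n' "$2"
    printf 'Host: %s\r\n' "$1"
    printf 'Connection: close\r\n'
    printf '\r\n'
}

# The request is just bytes on stdout, so any TCP client can send it, e.g.:
#   mkreq example.com /index.html | nc example.com 80
mkreq example.com /index.html > request.txt
```

Because generation and transport are separated, the same helper works over plain TCP, through an stunnel tunnel, or with a decades-old netcat that knows nothing about TLS.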
For step 5(a), Links has served me well. I am open to suggestions for a better choice, but there are few people online who are _actual_ daily text-only www users and who comment about the experience.^1 An HTML reader/printer, without any networking code, is another small program that should be part of UNIX.
For step 5(b) I have written and continue to write small programs to do this, sort of like file carvers such as foremost but better, IMO. However I will often use tnftp for convenience.
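A rough sketch of step 5(b) using only stock tools (the commenter's own programs are more capable than this): carving href URLs out of saved HTML, which can then be fed to a simple fetcher such as tnftp. The sample page below is invented for the example.

```shell
# Hypothetical input file for the example.
cat > page.html <<'EOF'
<html><body>
<a href="http://example.com/a.txt">a</a>
<a href="http://example.com/b.txt">b</a>
</body></html>
EOF

# grep -o prints only the matching part of each line;
# sed then strips the surrounding href="..." syntax.
grep -o 'href="[^"]*"' page.html | sed 's/^href="//; s/"$//' > urls.txt
```

The resulting one-URL-per-line list composes with any downloader, keeping extraction and retrieval as separate steps.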
I used tnftp for many years as the default ftp client on NetBSD and prefer it over (bloated) curl or wget. It is small enough that I can edit and re-compile it if I want to change something. Because it comes from the NetBSD project, the source code is very easy on the eyes.
1. IMO, no sane _daily_ text-only www user today would use Lynx. Whenever anyone mentions it as a text-only browser option, I suspect that person is not likely a _daily_ text-only www user. Lynx is bloated and slow compared to Links, and its rendering is inferior, IMHO.
marttt | 3 years ago
Would you mind sharing some of that code?
Some of your recent comments on web browsers, text browsers and javascript [1 + its follow-up] are really interesting. Thanks for sharing.
1: https://news.ycombinator.com/item?id=32131901