This is where I disconnect. The internet is not broken. Maybe arguably the _web_ is broken, and specifically, web pages are broken (the HTTP protocol is still a wonderful thing).
I wish technical authors would stop repeating the internet-is-broken meme when they really mean the web is broken. Sure, there's plenty broken with the internet in other ways (DNS, encryption, governments, DDoSes, etc.), but let's not make the mistake of equating the internet with the web.
The Internet is broken too. We've been slowly lulled into a scenario where there are first-class, second-class, and even third-class netizens, and the third class I would not even consider as having an Internet connection.
First-class netizens are publicly routable nodes with statically allocated IP addresses. These have real, honest Internet connections; they get to participate in the global community of humans and their beloved machines.
The second class is like the first, but the service provider has imposed restrictions on their ability to communicate freely with other machines, such as blocking any packets they send on specific protocols and ports, notably TCP port 25, meaning these people cannot send their own email.
The third class is like the second, but without statically allocated IP addresses; they therefore cannot reliably and consistently participate in the global community and must jump through hoops to participate even in a limited way.
There is a hidden fourth class too. These cannot be considered as having a connection to the Internet at all: they cannot participate, only passively consume existing services. Their machines are not connected to the Internet but to another network, and their service provider will just barely allow them to make requests to services on the real Internet, but it will not route any new connections to them. They are cut off and isolated, the silent majority. This is where the brokenness of the Internet truly shines. This mode of connection should be illegal; hiding yourself in this way should be an active choice on the part of the individual, not something imposed by their "service provider".
But almost all websites require HTTPS now, and you know what? I have a dozen dumbphones that cannot access web pages because HTTPS is no longer supported by the device's vendor, or that cannot complete the handshake at all because it takes too long over GPRS (when there's no EDGE). If HTTP is so wonderful, then why can't I use it without all the stupid security?
"The internet is broken" he says; a moment later he greets his wife he met on tinder, then he opens a 3blue1brown video, a YouTube channel that he himself admits is the first teacher in his life that made math truly click in his brain, in the room right after his son is playing Minecraft with his best friend, a little girl from Japan who he has never met but already knows a bit of japanese thanks to their long chats over Discord. "Oh so broken" he sincerely lamented.
This is a cute and cool dive into taking backwards compatibility to absurd extremes. There’s one bit I might take overly-serious issue with (I hope the author will take this as non-serious criticism and an amused engagement with the thought process):
> That said, browsers that predate CSS do not know what to do with <style> tags, and as a result simply print the styles out at the top of the page. It's pretty ugly and, depending on how much CSS you have, borderline unusable.
> […]
> Forgotten Link Tags
I know external vs. inline CSS is pretty much never going to be a settled question, but I feel like there's a missed opportunity here, in particular because:
- External CSS by link element addresses this much better for the backwards-compatible extremes: why even serve the contents of the CSS at all if the browser will just treat it as a comment? That’s just precious bandwidth wasted.
- Taken to extremes, external CSS relaxes the "use CSS sparingly" principle if you strategically break up the link tags to align with the likelihood-of-support/value ratio. Using media attributes to split CSS payloads can improve UX for old/underpowered mobile devices and even desktops/laptops with lower-resolution screens. Detecting support for (ahem) @supports can open all kinds of CSS-minimization doors for newer but lower-powered devices. Segmenting stylesheets by capabilities and preferences allows compatibility improvements regardless of point in time!
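To sketch the splitting idea (the filenames and breakpoints here are hypothetical, not from the article):

```html
<!-- Base styles that every CSS-capable browser applies;
     pre-CSS browsers ignore <link> entirely, so nothing is
     printed at the top of the page -->
<link rel="stylesheet" href="base.css">
<!-- Applied only when the media query matches; modern browsers
     also deprioritize loading of non-matching stylesheets -->
<link rel="stylesheet" href="wide.css" media="(min-width: 60em)">
<link rel="stylesheet" href="dark.css" media="(prefers-color-scheme: dark)">
```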
And a minor nit: if you’re dithering images for file size it’s worth comparing against a non-dithered version with a small color palette. The latter might actually be better for everyone!
Oof, and that prompted a larger one: you can improve UX for ~every real visitor with the <picture> tag and more efficient formats, and you could also sorta skirt the no-JS rule and upgrade pre-<picture> browsers to PNG with a simple inline onerror.
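A rough sketch of that combination (formats and file names are illustrative, and this is one reading of the onerror idea, not the author's own markup):

```html
<picture>
  <!-- Modern browsers take the first source type they support -->
  <source srcset="photo.avif" type="image/avif">
  <source srcset="photo.webp" type="image/webp">
  <!-- Pre-<picture> browsers ignore the wrapper tags and render
       the plain <img>; if the GIF fails to load, the inline
       onerror swaps in a PNG instead -->
  <img src="photo.gif" alt="A photo of the thing"
       onerror="this.onerror=null; this.src='photo.png';">
</picture>
```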
I’ve been living in backend-land for so long I’ve never actually used the <picture> tag. I will have to take a stab at it and see how the legacy browsers treat it, because if I don’t have to use GIFs, I won’t.
As for the @media tags, I do utilize them to a degree, just to make everything render nicely on mobile and to support dark-mode. But (to put it cheekily) I’m more concerned with backwards compatibility than forwards compatibility :p
Externally linked sources introduce a backward-compatibility problem of their own. E.g., I recall early versions of IE only supporting inline JS. So, if you want to support as much as possible as far back as possible, inlining is the way to go.
(Since every such extension faced this problem when it was introduced, they all support being enclosed in HTML comments. Otherwise, they would never have had a chance of becoming broadly adopted.)
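The comment-enclosing trick looks like this; a browser that predates <style> or <script> sees only an HTML comment and renders nothing:

```html
<style type="text/css">
<!--
  body { font-family: sans-serif; }
-->
</style>
<script type="text/javascript">
<!--
  document.title = "Hello";
// -->
</script>
```

(Note the `// -->` inside the script block, so the JS parser doesn't choke on the closing comment marker.)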
Regarding dithering: I recall a few sites with duotone dithering, which was a nice effect. And, of course, to bring down file sizes and optimize results, you could always manually compose from dithered and undithered images using the same palette.
Regarding HTTPS, one thing I like to do on my personal websites is check whether the client actually wants to upgrade protocols, instead of forcing HTTPS on everyone.
# $https is "on" for TLS connections and empty otherwise;
# $http_upgrade_insecure_requests is "1" when the browser sent the
# Upgrade-Insecure-Requests header. The combined value is therefore
# "1" only for a plain-HTTP request from a client asking for HTTPS.
set $need_http_upgrade "$https$http_upgrade_insecure_requests";

location / {
    if ($need_http_upgrade = "1") {
        # Tell caches the response depends on this request header,
        # then redirect the willing client to HTTPS.
        add_header Vary Upgrade-Insecure-Requests;
        return 301 https://$host$request_uri;
    }
    index index.php index.html;
    try_files $uri $uri/ /index.php?$query_string;
}
It's pretty straightforward to do in nginx, and my websites remain usable in IE5, Contiki, and various feature phones.
This still enables man-in-the-middle attacks, even for clients that want to upgrade protocols: an ISP or the owner of the Wi-Fi network can just quietly drop all of the upgrade security headers.
Another trick is to include an HTTPS resource (image/CSS/JS) in the HTTP page, and on the HTTPS side send the header that forces HTTPS for the whole domain (Strict-Transport-Security, a.k.a. HSTS). Then, if the resource loads successfully, future loads of the site go over HTTPS. Browsers that don't support Upgrade-Insecure-Requests often do support HSTS.
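In nginx terms, the server-side half of that trick might look like this (a sketch; the max-age value is an arbitrary example):

```nginx
# In the HTTPS server block only: once the embedded resource (or any
# page) loads over TLS, HSTS pins the whole domain to HTTPS for
# future visits by that browser.
add_header Strict-Transport-Security "max-age=31536000";
```

The HTTP page would then embed something like `<img src="https://example.com/pixel.png">` (a hypothetical URL) so that capable browsers make at least one TLS request and pick up the header.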
Reading the article, I was just thinking there should have been an Accept-Protocol header, but now I see that a limited version of that exists as Upgrade-Insecure-Requests.
I don't think that the advice to offer non-HTTPS is good: it exposes your users to downgrade (SSL stripping) attacks. Even extremely old browsers supported HTTPS: it was added to Netscape in 1994, and Internet Explorer in 1995 (IE2). You shouldn't have to give up security for users of modern browsers in pursuit of backwards compatibility.
(It might be a bit tricky to find an HTTPS configuration that supports both modern and extremely old browsers, but it should be possible.)
I don't think it's possible to use modern HTTPS with old browsers. All the old ciphers are now insecure and obsolete. Even if you supported the old ciphers, what would be the point, since they're insecure? So just provide plain HTTP.
For the majority of users, man-in-the-middle attacks (by someone other than your ISP) will never be an issue. It's mostly a theoretical problem. Your connection at home (and your laptop) is as safe as your Wifi connection. Your mobile connection is probably more secure. And there is no hacker sitting in your coffee shop waiting to p0wn your connection to Facebook or send you a 0day. HTTPS is necessary for the whole world to trust e-commerce, but saying everything has to be encrypted is ridiculous.
The most likely MitM anyone will ever experience is DNS cache poisoning, and that's pretty rare.
The site is compatible with IE1, HTTPS would break that. Unacceptable.
And I don't believe it is possible to have an HTTPS configuration that suits both old and new browsers, since new browsers regularly deprecate older versions of SSL/TLS. I think anything less than TLS 1.2 is deprecated in many browsers now, and TLS 1.2 is from 2008, way too modern for a website focused on compatibility.
For compatibility, HTTP is your only choice; secure protocols will keep being deprecated as new vulnerabilities are found and stronger protocols are made.
I disagree completely. Sometimes backwards compatibility is more important.
There are applications where you want maximum security (e.g. banking) and there are others where it is not only not necessary, but also a hindrance (ART, for example)
1. When I say "as backwards compatible as possible," I mean that this website will be usable on as many browsers, connections, and hardware configurations as I can reasonably support.
2. Raw HTML is Your Friend: Remember the <FONT> tag? And <TABLE>-based layouts? What about <CENTER>? I do.
3. I won't go into the technical details here (Low Tech Magazine does a far better job explaining it than I can), but the thing to know is that dithering allows you to reduce the filesize of your images by reducing the amount of information it takes to render them.
4. OldWeb.Today: The most frequently used tool in my arsenal, oldweb.today is a website that allows you to emulate a number of different retro browsers (from NCSA Mosaic to Netscape Navigator) directly within your own browser.
I practice a lot of these things on my website, locserendipity.com, and on the associated search engine, which is the only one you can download with a working index, although it is limited to only around a million entries.
This is great. I'm not a front-end dev but I have a low-traffic personal site where I like to experiment. One of my hobbies is to find out how small, lean and compatible with old browsers I can make it, while still looking modern-ish (very subjective I know) and with _some_ interactivity that can degrade gracefully.
So far it's pretty much usable in IE6 and Mosaic 2 (1993).
I used to love Caddy. Not so much after the move to 2.0, when they got the JSON-formatted config thing going on. Of course there is still the Caddyfile, but most of the time I'm bashing my head against their JSON-focused documentation page trying to find out how to do the same thing from a config file. I'm pretty sure it's documented somewhere, but I don't want to read a book; I just need a nice, quick web server that runs from a single binary like v1 did.
Supporting old web standards is neat, and you make your goal with this clear in your article, but I can't help but wonder if just using a subset of the newer standards isn't a better idea in the long run.
HTML5 standardized a lot of the browser-specific features that plagued websites, and did away with a lot of ways to do the same thing with subtly different results. Supporting all these old methods and standards for compatibility purposes is part of the reason modern web browsers are so huge (the other part being overzealous standards). So I wonder if someone couldn't try to figure out a subset of the current standards that is simple to implement. Simpler browsers could then be made for older machines, and people would have an easier time making sites that run on old machines, without digging into the arcane behavior of old browsers and having to hackily support both.
> My daily driver is a mid-2012 MacBook Pro that will stop receiving all updates (security included) from Apple by the end of this year. While I personally intend to keep it alive with Linux, this isn't a path that is readily available to most people. What are most people supposed to do in this situation to save some money and avoid adding more junk to our landfills?
I’m in the same boat with the same machine. Have you heard of Open Core Legacy patcher? The 2012 machine is apparently the cut-off for ‘everything works’ but I’m really curious about the definition of ‘works’. Also, the security implications of using it, of course.
Why not serve .txt and be done with it? Sometimes I wonder who the styling is for, especially on personal sites/blogs. Repeat viewers/readers come for the text, if they come at all (otherwise they use their favorite reader).
If someone is using a screen reader, low-contrast/high-contrast setting, translator - plain txt simply works. For more intricate content like schematics, equations, offer a link to PDF.
I am not sure if the lack of mention of Lynx was an oversight.
I started writing my personal website in 1995 and since then have not changed much about my usage of HTML. So, really no surprise that it still looked pretty okay when I opened it in NCSA Mosaic 2. Of course, some images are missing and some JavaScript animations do not work. I do use NOSCRIPT. I am using a plain text editor (which has a function to follow local links and open them) for editing my website in HTML. I am also using a program for checking the HTML.
Almost all "other" protocols, have been replaced by HTML over HTTP. (from chat applications, web interfaces for mail, forums instead of newsgroups, etc.)
paskozdilar | 3 years ago:
[0] https://www.gnunet.org/en/
123pie123 | 3 years ago:
With BGP it's just about limping along with trust
https://www.bleepingcomputer.com/news/security/major-bgp-lea...
layer8 | 3 years ago:
TFA doesn't mention websafe colors. I still have that poster hanging on the wall: http://www.visibone.com/color/poster4x.html
xdennis | 3 years ago:
Last year I traveled to my parents' house and booted up my old computer, which I hadn't used since 2014. It was running Ubuntu 12.04, I think.
I could barely browse any websites in Firefox because of TLS issues.
westcort | 3 years ago:
This experimental search engine indexes all of the .edu pages on DMOZ circa 2010. It also has a local index: https://locserendipity.com/edu.html?q=amateur%20radio
adamomada | 3 years ago:
(I'm assuming OP is the author)
[0] https://dortania.github.io/OpenCore-Legacy-Patcher/
jhoechtl | 3 years ago:
Again and again: it's not the internet that's broken, it's only HTML plus the required interplay of a bazillion specs.