I don't care about how big a page is. I care how fast it loads.
And some of these pages fail that test badly. For example, sdf.org takes a whopping 1.60 seconds for the home page GET request to return a single byte.
I'd like to see a new leaderboard of 'instant' pages. An instant page is one where a typical user click has the result fully rendered inside 100 milliseconds (almost imperceptible latency).
That's an incredibly hard target to reach. Establishing a secure connection from the browser to a server requires several round trips: 1 for the DNS lookup, 1 for the TCP handshake, 2 to establish the TLS connection, and 1 for the HTTP request itself. If you divide your 100ms perf budget equally between them, your servers can be no more than about 1,800 miles from the user: (100ms × c) / 2 (for the round trip) / 5 round trips. Using TLS 1.3 removes one round trip, which is definitely a good optimization. And that's before you've even started any server-side rendering, session handling, DB connections, etc.
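That back-of-the-envelope distance math checks out; here's a sketch of it, assuming signals propagate at the speed of light in vacuum (light in fiber is roughly a third slower, so real budgets are even tighter):

```python
# Max server distance for a 100 ms budget, assuming 5 sequential round trips
# (DNS, TCP, 2x TLS, HTTP) and propagation at the speed of light in vacuum.
# Fiber and routing overhead make these optimistic upper bounds.

C_MILES_PER_MS = 186.282  # ~186,282 miles per second = ~186.282 miles per ms

def max_distance_miles(budget_ms: float, round_trips: int) -> float:
    one_way_ms = budget_ms / round_trips / 2  # each trip goes out and back
    return one_way_ms * C_MILES_PER_MS

print(max_distance_miles(100, 5))  # ~1,860 miles with TLS 1.2 (5 round trips)
print(max_distance_miles(100, 4))  # ~2,330 miles with TLS 1.3 (one fewer)
```

Which is where the ~1,800-mile figure comes from, and shows concretely what the TLS 1.3 round trip buys you.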
Even with a really good edge network that's going to be incredibly hard to achieve.
I'd love it if every website served a page in 100ms, but it's never going to happen. I got https://ooer.com down to below 500ms for myself (495ms last time I checked) after a fair bit of effort. Getting any lower would be a waste of time. Users can wait an extra 400ms.
I'd rather have a page load in 2 seconds and be done with it than one that shows up in 500ms and keeps loading advertising shit in the background for 10 seconds. Page size is merely a hint about the level of counter-user sophistication: the lower, the better.
This seems really hard to do unless you're using a global CDN and forgoing HTTPS. Ping between the US and Singapore is almost 300ms in the best conditions. That alone already blows the 100ms budget for visitors far from your server location. And then there's HTTPS, where the initial handshake alone can take several hundred ms (depending on various factors).
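A measured round-trip time puts a hard floor on cold-connection first-byte latency before the server does any work at all. A sketch, using the ~300ms US–Singapore figure from above and assuming no connection reuse:

```python
# Lower bound on time-to-first-byte over a cold connection: the round trips
# are serialized, so the floor is simply rtt * number_of_round_trips.

def cold_connection_floor_ms(rtt_ms: float, tls13: bool = False) -> float:
    # DNS (1) + TCP handshake (1) + TLS (2, or 1 with TLS 1.3) + HTTP request (1)
    round_trips = 4 if tls13 else 5
    return rtt_ms * round_trips

print(cold_connection_floor_ms(300))              # TLS 1.2: 1500.0 ms minimum
print(cold_connection_floor_ms(300, tls13=True))  # TLS 1.3: 1200.0 ms minimum
```

So a 300ms ping means well over a second before the first byte can possibly arrive, no matter how small the page is.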
It seems like every dev I know has the idea of making a minimalist website. I guess I would much rather see a curated list of beautiful (in some unique way) websites than another 1000 minimalist websites.
> The website must either be very noteworthy or some content from the website must have received at least 100 points on Reddit or Hacker News on at least one occasion.
It's great we now have an elite (10kb club) of the elite (100+ points on a link curator). I don't think this is particularly useful for discovery.
My new club, the 5kb club, requires at least 200 internet points, membership in the 10kb club, having met the Queen before she passed, and being under 5kb.
Mine has 6kB of raw text, so getting it under 10kB would be difficult unless I artificially split it into multiple pages, one per project, rather than the collapsible sections I have now.
Still, those 6kB of text turn into 44kB of HTML with the markup, I should probably do something about that.
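Markup overhead like that mostly disappears on the wire, since repetitive tags compress extremely well under the gzip (or brotli) transport compression most servers apply. A quick sketch with made-up markup illustrating the effect:

```python
# Estimate on-the-wire size of repetitive HTML under gzip. Repeated tags
# compress far better than prose, so raw size overstates transfer cost.
import gzip

text = "some project description " * 40  # ~1 kB of plain text
html = ("<section class='project'><p>" + text + "</p></section>") * 10

raw_kb = len(html.encode()) / 1000
gz_kb = len(gzip.compress(html.encode())) / 1000
print(f"raw: {raw_kb:.1f} kB, gzipped: {gz_kb:.1f} kB")
assert gz_kb < raw_kb / 10  # highly repetitive markup shrinks more than 10x
```

So 44kB of markup around 6kB of text likely travels much closer to the text's size than to 44kB.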
But all of them need to have at least one article with 100 upvotes on Hacker News or Reddit, so it's probable at least some of them are worth visiting.
As soon as you include images and custom fonts it becomes really hard to keep things small.
I just measured my landing page which looks like every other SaaS landing page out there and it's 12KB of HTML/CSS and 202KB of illustrations and fonts (4*35KB illustrations, 2*25KB fonts).
If you compile your webpages, there are tools to strip from fonts any Unicode codepoints, weights, and ligatures you don't use. Many fonts get 90% smaller. For fonts only displayed at small sizes, you can also simplify curves to shrink them further.
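One common subsetting tool is pyftsubset from the fonttools package. The helper below just computes the codepoint list a page actually uses; the file names and invocation in the comments are illustrative, not a specific build setup:

```python
# Build the --unicodes argument for a font subsetter from the text a page
# actually renders. Feeding only these codepoints to a tool like pyftsubset
# (fonttools) is what yields the ~90% size reductions mentioned above.

def used_codepoints(page_text: str) -> str:
    # Comma-separated U+XXXX values, the format pyftsubset --unicodes accepts.
    return ",".join(f"U+{cp:04X}" for cp in sorted({ord(c) for c in page_text}))

print(used_codepoints("Hello"))  # U+0048,U+0065,U+006C,U+006F
# Then, roughly (illustrative invocation):
#   pyftsubset font.ttf --unicodes="<that list>" --flavor=woff2 \
#       --layout-features='' --output-file=font.woff2
```

Dropping layout features removes ligatures and similar tables; keep them if your typography depends on them.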
I think having custom fonts disqualifies you from the intent of this (avoiding extra bloat that has (near) zero impact on the content/value of the page).
Maybe not quite in the 10KB club, but Drudge Report and Craigslist both still look like 1998 high school computer club projects and load almost instantly.
That website currently weighs 9.50 kB. Hence, it cannot add too many more sites (without removing some other bits) before hitting its own arbitrary size limit.
My home page could get on this site if I reduce the amount of CSS, but I already have it remove unused items from the CSS. Eh, fun project for another day.
My assumption is that the list was automatically generated and contained many pages that met the size criteria but were error messages, placeholders, or other uninteresting things like a curse word repeating down the page, or worse, tiny PoCs for CVEs.
These clubs remind me of this tweet (as someone who has always been the perf guy wherever I worked):
---
Also, hot take: users care way less about performance than you think. They want "fast enough", but we're over-indexing on "as fast as possible" instead of caring about other things that matter more to users.
I have seen very little code that is written to be "as fast as possible" and a lot of code that aims to be "fast enough" and fails to meet that bar. Perhaps still worth being mindful about it?
Solid research on this exists, so there is not much to deny, at least when your goal is selling to the average user.
But: where is this magical place where people put heavy emphasis on performance? It certainly isn't here, and I would like to visit it for once. Companies caring, or giving their programmers the time to care, about performance are absolutely a rare sight. Unless they work in areas in which they are forced to care about it, and even then many stick to the minimum.
Snappy, low latency software is such a delight to encounter because it is just so damn rare. Especially on the web, I constantly feel like I'm wading through molasses, and the only reason I am able to endure this constant, agonizing pain is that I've gotten so used to it...
A lot of people worry about super-scale prematurely, thinking that "fast enough" for each individual user means needing to be as fast as possible because you have a great many users making concurrent requests.
Somewhat counter-intuitively to some: this way of thinking can actually make things needlessly less responsive for individual users because of the "over-architecting" it can cause.
I agree completely regarding "users want fast enough".
Also: for regular sites, HTML/CSS/JS optimization matters less than server location. If your server is in Europe and your user is in the US, that's the big one, not your HTML, CSS, or JS.
And if you're fetishizing Lighthouse scores, stop. It's only a very rough measure and shouldn't be treated as a goal in itself.
That fast enough is good enough shouldn't be a hot take?
But if you are implying that the web today is fast enough, that for sure is a hot take. Our industry most certainly doesn't care the slightest about performance or user experience.
londons_explore | 3 years ago
onion2k | 3 years ago
speed_spread | 3 years ago
kingofpandora | 3 years ago
neurostimulant | 3 years ago
serf | 3 years ago
You should care about both; not everyone has unlimited data capacity.
donohoe | 3 years ago
Small sizes at extreme cost to UX is not worth it.
andai | 3 years ago
bugfix-66 | 3 years ago
Can you point to a specific problem?
simonsarris | 3 years ago
danuker | 3 years ago
bArray | 3 years ago
antirez | 3 years ago
endofreach | 3 years ago
remram | 3 years ago
unknown | 3 years ago
[deleted]
saagarjha | 3 years ago
codazoda | 3 years ago
It looks like GitHub includes normalize.css, which is 12kB all by itself. Damn it.
Next I looked at the css framework I built and use for all my sites. I know it's only about 2.5k, but I have a screenshot. 114kB transferred. Damn it.
I'm not in this club even though I try to be tiny.
giancarlostoro | 3 years ago
tasuki | 3 years ago
What?
> I know it's only about 2.5k, but I have a screenshot. 114kB transferred.
What?
jacknews | 3 years ago
taxman22 | 3 years ago
CynicusRex | 3 years ago
PS I am biased because mine is on there.
tyingq | 3 years ago
ad404b8a372f2b9 | 3 years ago
londons_explore | 3 years ago
tasuki | 3 years ago
That's reasonable. I don't care whether a website is 10kb or 200kb, I mind if it's 20mb of things which aren't even necessary.
Ferret7446 | 3 years ago
JJMcJ | 3 years ago
Actually Hacker News is quick as well.
uallo | 3 years ago
rbonvall | 3 years ago
rnestler | 3 years ago
ghoward | 3 years ago
woofyman | 3 years ago
galangalalgol | 3 years ago
metadat | 3 years ago
https://news.ycombinator.com/from?site=10kbclub.com
yoz-y | 3 years ago
rozenmd | 3 years ago
Source: https://mobile.twitter.com/DavidKPiano/status/15787403709971...
saagarjha | 3 years ago
2pEXgD0fZ5cF | 3 years ago
dspillett | 3 years ago
luckylion | 3 years ago
tjoff | 3 years ago
Popups alone are proof of that.
anderspitman | 3 years ago
abruzzi | 3 years ago
agumonkey | 3 years ago
This is something we should reward: less chrome, more ideas and expression (unless said chrome is also partly that).