I really like the minimalism of these. I guess the interesting part of the linked article was that the author wanted his blog to look exactly the same as it did before the optimisations.
> This is a nice example of ‘premature optimization’ but I do hope that the users of the blog like the end result.
Is it really premature? You already had a blog that worked, saw a need (or challenged yourself?), and acted. Sure, this might have been a bit overboard, but I think that it was the right time for this optimization (not premature).
> Optimizing a thing like this is likely a bad investment in time but it is hard to stop doing a thing like this if you’re enjoying it and I really liked the feeling of seeing the numbers improve and the wait time go down.
You did a good job. And I like seeing those numbers go down, too!
It's the only page you can read on a GPRS connection (http) and you can literally see every packet as it's transmitted, because the page is rendered bit by bit.
Is there a way to slow it down on fast connections, and/or a video of this? I have an academic interest and am curious how they did this, but I don't have an easy way to demonstrate it to myself, as I don't currently have a connection slow enough that the page doesn't render instantaneously.
This is a worthy goal for any site you build, not just a blog... that said, it's not even that hard to get crazy bloated CMSes like Drupal, Wordpress, etc. as fast (or faster), as long as you set up basic caching (e.g. Nginx, Varnish, CloudFlare, etc.).
I have a basic-ish theme, and I use Drupal's AdvAgg module to aggregate and minify JS and CSS, as well as a few other tricks to get page loads smaller. Finally, I use Nginx's dead-simple proxy cache to make most page loads take < 600 ms (and faster, if you're near NYC, where my DO Droplet is located).
Obviously, the more images and other elements (e.g. social embeds, analytics and other junk), the more time spent downloading the page.
But, IMO, there's no excuse for your personal blog to take more than 1s to render and fully deliver a page, with an exception if you embed videos/audio (e.g. podcasters).
It's not just downloading. Websites come with CSS (not that big of a problem) and JS (a big problem), and all of that has to be processed by the browser. I've found that websites sometimes pull in whole CSS and JS bundles (even ones over 20 kB), sometimes from other servers. Just the other day I tried a website on my aging smartphone and it sometimes took over 20 seconds just to type a single character into the search box.
The average website on teh internets, in my opinion/experience, is bloated to hell. And i'm not talking about functionality here.
> Finally, I use Nginx's dead-simple proxy cache to make most page loads take < 600 ms
600 ms is appalling. Most sites should be able to get under 50 ms easily (ignoring internet latency). A blog should be under 25-30 ms, even if it has to hit the database.
Apropos: I'm working on a personal WP theme currently. I've got HTML, CSS, and JS down to under 20 kB so far for a reasonably presentable responsive front page (less on the wire gzipped), with the CSS and JS minified. I'm experimenting with inlining "critical" CSS, i.e. for "above the fold" content. The JS is for parallel loading of CSS to prevent delays in rendering, so content displays before styles are applied (I may reverse this decision). This means the critical CSS has to focus on layout, font sizes, etc. to minimise that awful reshuffling that happens when styling is applied.
It's turning out to be quite a fun challenge, reminiscent of early web days when I'd muck about trying to shave 2k off a JPEG.
Hopefully notions such as "critical" CSS and inlining to reduce requests might encourage the re-emergence of lean yet visually pleasant websites.
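The critical-CSS inlining described above can be sketched as a tiny build step. This is a hypothetical script (the file contents and tag are made up for illustration, not the actual theme code):

```python
# Sketch: replace the render-blocking <link> tag with an inline <style>
# block so the first paint doesn't wait on a separate CSS request.
def inline_critical_css(html: str, css: str, link_tag: str) -> str:
    return html.replace(link_tag, "<style>" + css + "</style>")

link = '<link rel="stylesheet" href="critical.css">'
page = "<head>" + link + "</head>"
critical = "body{margin:0;font:16px/1.5 sans-serif}"
inlined = inline_critical_css(page, critical, link)
```

In a real build the critical subset would be extracted by a tool rather than maintained by hand, and the full stylesheet would still load asynchronously afterwards.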
CMSes are designed to sit behind a caching layer like Varnish: a CMS takes user input and renders it to HTML, which should only change in-page on AJAX requests. I really wish there were more public configs for Varnish; for some things like MediaWiki it's impossible to find well-written, documented configs, and creating one yourself has a lot of pitfalls. There are four versions of Varnish, too, which makes it a PITA.
BTW: What causes this 11 points + 4 comments thing to hit the front-page of HN? Just wondering, as I believed it's the number of comments + popularity that pushes the link to the front.
Also the rate at which it accumulates points. For example, if a submission was submitted an hour ago, and in the last 5 minutes it got 4 more points, it will probably hit the front page.
In the past comments actually counted against submissions, in order to avoid flame wars. It seemed actually impossible for a submission with more comments than points to hit the front page, no matter how many points it had. Conspiracy theorists claimed this was a common way to bury a submission. This has been relaxed somewhat, but I still wouldn't expect comments to help a submission.
I think the reputation of whoever clicked it also matters, even though it isn't publicly stated. If a few people with a lot of upvotes got to it, it could hit the front page fast.
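The formula that has circulated publicly for HN's ranking (a community reconstruction, not an official spec; the real algorithm has additional factors and penalties) divides points by a power of age:

```python
# Unofficial approximation of HN front-page ranking: score decays with
# age, with "gravity" commonly cited as ~1.8.
def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    return (points - 1) / (age_hours + 2) ** gravity

# Under this model, a fresh 11-point submission outranks a 40-point
# submission that is ten hours old.
fresh = rank_score(11, 1.0)
stale = rank_score(40, 10.0)
```

This is consistent with the observation above: a quick burst of points on a young submission beats a larger total accumulated slowly.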
I completely agree with the basic sentiment of this article. Far too many sites lead with massive images and huge javascript frameworks just to serve what could be a few kilobytes of text.
That said, I did not go as far as this author when designing my personal blog[1]; I considered the following tradeoffs worth the slight cost:
* I didn't inline images or CSS. I can see the appeal but I don't believe it is really worth it unless you have relatively small amounts of CSS. In theory HTTP2 is supposed to help here as well, and on really slow connections inlining can slow things down as the browser is forced to download the inlined stuff instead of progressively displaying the page as it can.
* I ended up deciding that the custom font I wanted to use was worth the cost. I thought hard about it though and would perhaps decide against it if I was designing the site again.
* You can drive yourself insane trying to minimize traffic for images. Should you try to serve 2x images for retina displays? Small images for mobile devices? In the end I just serve the same images for everybody and minimize the use of images overall. It works for me because I don't have a lot of need for splashy pictures.
* I avoided any type of social media button or plugin, they tend to make additional requests back to the mothership. Very few people actually liked or +1ed anything on my old blog anyway, but people with better blogs might find the trade-off worth it.
I don't inline CSS or images either, but I do give them a long expiry time (a year). The first hit to my blog [1] might not be that fast, but subsequent hits should be (it's mostly text anyway). The CSS file has a unique name, and when I change it (which doesn't happen often; the last time was May 2015) it gets a new filename. Also, I serve no Javascript.
I do have one external bit---a block pointing to my Amazon affiliate account (which might have Javascript, I don't know, probably does). It's disabled for mobile devices (via CSS---I use CSS to change the layout to make it more mobile friendly).
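The unique-filename scheme mentioned above can be automated with a content hash, so the name changes exactly when the file does. A sketch (the naming convention here is an assumption, not the commenter's actual setup):

```python
import hashlib

# Fingerprint the filename with a hash of the content: unchanged content
# keeps its name (and stays cached for the full year), while any edit
# automatically produces a new name that busts the cache.
def hashed_name(name: str, content: bytes) -> str:
    digest = hashlib.md5(content).hexdigest()[:8]
    stem, _, ext = name.rpartition(".")
    return f"{stem}.{digest}.{ext}"

name_v1 = hashed_name("site.css", b"body{max-width:40em;margin:auto}")
name_v2 = hashed_name("site.css", b"body{max-width:42em;margin:auto}")
```

Static site generators and asset pipelines typically do exactly this, which is what makes "expires in a year" safe.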
I recently did an optimization pass on my own blog[0], and came to the same conclusions.
I thought a lot about images, since I have a lot of game screenshots. I decided that since my redesigned homepage only shows summaries of posts with the first image, my original rule of no image bigger than 80 kB was good enough. It makes the homepage about 500 kB on an uncached first hit, but I figure that's more useful than 500 kB of CSS and JS.
Not only did I decide that my custom fonts were worth it, I started serving them in the smaller WOFF2 format, instead of (original) WOFF. Once I looked at the browser support, it was a no-brainer.[1]
I'm also kind of stingy about browser requests. I've gotten it to no more than 10 (homepage; individual articles are less). If a resource is fairly small, it might not be worth the wait for the browser to open a connection and download it. All my fonts are base64 inlined into my CSS. Sure, it makes the CSS bigger, but the extra request is gone, and after gzip compression, it's almost the same size. I used Font Squirrel to eliminate unused glyphs from the fonts I use.[2]
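Base64-inlining a font into CSS, as described above, just means building a data: URI inside an @font-face rule. A sketch with stand-in bytes (not a real WOFF2 file):

```python
import base64

# Embed the font bytes directly in the stylesheet: the CSS grows by about
# a third (base64 overhead), but one whole request disappears, and gzip
# recovers much of the size difference on the wire.
def font_face_rule(family: str, font_bytes: bytes) -> str:
    b64 = base64.b64encode(font_bytes).decode("ascii")
    return (
        f"@font-face{{font-family:'{family}';"
        f"src:url(data:font/woff2;base64,{b64}) format('woff2')}}"
    )

rule = font_face_rule("BodyText", b"stand-in woff2 bytes")
```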
One thing I noticed about your CSS is that it has enormous textual redundancy, but it's not gzip compressed. You can get an easy speedup by enabling that.
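That gzip point is easy to verify with the standard library; repetitive rules (the kind generated CSS is full of) compress extremely well. A synthetic example:

```python
import gzip

# 200 copies of the same rule stand in for highly redundant CSS; gzip
# collapses the repetition to a tiny fraction of the original size.
css = ".box{margin:0;padding:0;border:1px solid #ccc}\n" * 200
compressed = gzip.compress(css.encode("utf-8"))
```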
The joy of making a site like this isn't just the speed, it's that you can continually remove and simplify things without making the experience worse. I suppose that's the definition of minimalism?
It means that when you _do_ add an image, or some JavaScript, you're doing it because it demonstrably adds something of considered value, not just because it's easy.
As wonderful as it is that we have complex tools available for complex use cases, let's keep simple things simple. For example, for something as simple as a mobile dictionary web app, you don't even need a framework. Just look at this web app: it loads faster than HN, practically instantly, and it still looks sleek: http://m.dict.cc/
Take a look at the JavaScript, it is so beautifully anti-best-practices! No framework, global namespace pollution, whatever: it just works!
Speed and readability are among the main reasons why I'm still sticking to RSS when it comes to reading blogs. It avoids most bloat issues, and I don't really care about anyone's favorite colors and web fonts.
> A blog is really fast if you don't put anything but text in it basically
Here's a dummy test page I made a while ago to see if I could create a fairly lengthy, fast-loading text page for slow mobile connections. It's hosted on a cheap shared hosting plan, so it may well fall over (or not!)
The image at the top of the page hasn't been optimized (about 40 kB); however, I do think aesthetics are important in page design, and I'm against reverting to a plain HTML look with no CSS styling. The test pages above are plain looking but, I hope, reasonably pleasant to look at. (The custom font version looks nicer in my view than the no-font version, but of course it adds a bit of extra page weight.)
Text, and some basic CSS to create a nice, readable style. What also helps is developing the CSS/HTML so that the reader mode in browsers can be triggered and used to read your content - users can then essentially make the experience fit their own requirements using a built-in browser feature.
He should revise his Apache configuration, because there's definitely something wrong there. The first request takes twice as long as the second:
bayesian-goat:CreditScoreIcons heyoo$ httping http://jacquesmattheij.com/the-fastest-blog-in-the-world
PING jacquesmattheij.com:80 (/the-fastest-blog-in-the-world):
connected to 62.129.133.242:80 (329 bytes), seq=0 time=1023.28 ms
connected to 62.129.133.242:80 (329 bytes), seq=1 time=554.06 ms
connected to 62.129.133.242:80 (329 bytes), seq=2 time=555.50 ms
^CGot signal 2
--- http://jacquesmattheij.com/the-fastest-blog-in-the-world ping statistics ---
3 connects, 3 ok, 0.00% failed, time 5052ms
round-trip min/avg/max = 554.1/710.9/1023.3 ms
bayesian-goat:CreditScoreIcons heyoo$ httpstat http://jacquesmattheij.com/the-fastest-blog-in-the-world
Connected to 62.129.133.242:80 from 192.168.1.1:53437
  DNS Lookup   TCP Connection   Server Processing   Content Transfer
[    521ms   |      344ms     |       278ms       |       559ms     ]
             |                |                   |                 |
    namelookup:521ms         |                   |                 |
                        connect:865ms            |                 |
                                      starttransfer:1143ms         |
                                                              total:1702ms
Edit: I noticed interesting things about bettermotherfuckingwebsite.com (Amazon S3, Content-Length: 1943) and motherfuckingwebsite.com (nginx/1.10.3, Content-Length: 5108) - the Content Transfer part on those two takes only 1 ms! Meanwhile dadgum.com has Content-Length: 9344 and transfer takes 162 ms. Anyone got ideas why the massive difference?
One thing worth considering here is caching. For elements common to a whole site (like web fonts and stylesheets), the initial download might be big, but subsequent downloads won't need to happen, since the browser already has a copy. It still ain't an excuse to load dozens of WOFFs, but it's enough to make the hit a lot less severe for those who've already visited your site.
Hacker: How dare you make me enable JS to view your website
Hacker: It's simple: clone the repo, install gcc, then dependencies, open command prompt, compile, now you can do the same thing as Microsoft Word, well kind of.
[This comment is 99% joke; still might be useful to look at the community aesthetic from an outside perspective]
I enjoyed this post the last time it came up[0] and learnt a few tips from it. Particularly interesting was the difference making CSS inline made, even for reasonably large amounts of CSS.
Well, you include the "Follow" Twitter button twice: once in the sidebar and once at the end of the blog. You can also PNGCrush your favicon.png to save 58 bytes. I'm sure I could spot a few more minor savings if I looked, not counting minification and cleaning up the CSS, since those were already mentioned.
I highly recommend the advice of Heydon Pickering [0]. The best optimizations can be made by not writing code.
If I disabled the custom font I'm using (87.7KB) the home page of my "blog" [1] comes in at ~1463 bytes. 802 bytes of which is the CSS, leaving under 1kb of HTML per post once the CSS hits cache. It would have an average load time of ~45ms.
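A back-of-envelope model for numbers like these: with payloads this small, total load time is dominated by round-trip latency rather than bandwidth (the figures below are illustrative assumptions, not measurements of the site):

```python
# Simple model: time = transfer (payload / bandwidth) + one round trip.
def load_time_ms(size_bytes: int, mbps: float, rtt_ms: float) -> float:
    transfer_ms = size_bytes * 8 / (mbps * 1_000_000) * 1000.0
    return transfer_ms + rtt_ms

# A ~1.5 KB page: on broadband the RTT is essentially the whole cost,
# and even a 56k modem only adds a couple hundred milliseconds.
broadband = load_time_ms(1463, 50.0, 40.0)
modem = load_time_ms(1463, 0.056, 40.0)
```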
> Imagine an envelope for a letter that weighed a couple of pounds for a 1 gram letter!
Sounds like the licenses we receive from Cisco. They are literally an A5 sheet of (thin) paper packaged, 3 boxes deep, in something easily the size of a shoe box.
djhworld | 9 years ago
discreditable | 9 years ago
dredmorbius | 9 years ago
qznc | 9 years ago
I also recently made a minimalistic news aggregator for German news: http://textnews.neocities.org
wlkr | 9 years ago
boostedsignal | 9 years ago
theandrewbailey | 9 years ago
hanikesn | 9 years ago
sbierwagen | 9 years ago
tristor | 9 years ago
dualogy | 9 years ago
snackai | 9 years ago
igk | 9 years ago
unknown | 9 years ago
[deleted]
geerlingguy | 9 years ago
See, for example: https://www.jeffgeerling.com/blog/2017/tips-managing-drupal-... (~600 ms from STL, MO, USA).
gens | 9 years ago
flukus | 9 years ago
austinjp | 9 years ago
devwastaken | 9 years ago
avn2109 | 9 years ago
movedx | 9 years ago
wkoszek | 9 years ago
amk_ | 9 years ago
nashashmi | 9 years ago
jessaustin | 9 years ago
throwawaydbfif | 9 years ago
moron4hire | 9 years ago
AndrewStephens | 9 years ago
[1] https://sheep.horse
spc476 | 9 years ago
[1] http://boston.conman.org/
theandrewbailey | 9 years ago
[0] https://theandrewbailey.com/
[1] http://caniuse.com/#search=WOFF2
[2] https://www.fontsquirrel.com/tools/webfont-generator
helipad | 9 years ago
currysausage | 9 years ago
nashashmi | 9 years ago
These are solutions that do not belong in the front end of web development. Something else must work instead.
edwinyzh | 9 years ago
mhd | 9 years ago
tantalor | 9 years ago
Both sites score 100 / 100 on desktop & mobile on Google PageSpeed Insights.
FanaHOVA | 9 years ago
interfacesketch | 9 years ago
Version A (no font loading): http://interfacesketch.com/test/energy-book-synopsis-a.html
Version B (loads custom fonts - an extra 40kb approx): http://interfacesketch.com/test/energy-book-synopsis-b.html
movedx | 9 years ago
coldtea | 9 years ago
And it seems it wasn't learned...
dzhiurgis | 9 years ago
yellowapple | 9 years ago
GZIP helps considerably here, too.
JustSomeNobody | 9 years ago
That's the money quote.
stillsut | 9 years ago
PuffinBlue | 9 years ago
https://tools.pingdom.com/#!/sNNVG/https://josharcher.uk/cod...
vs
https://tools.pingdom.com/#!/dSNHcL/http://jacquesmattheij.c...
[0] https://news.ycombinator.com/item?id=9995529
unknown | 9 years ago
[deleted]
Nadya | 9 years ago
[0] https://vimeo.com/190834530
[1] nadyanay.me
sandGorgon | 9 years ago
A Jekyll HTML theme that mimics the style of Medium.com and uses Google AMP.
jen729w | 9 years ago