
How to build a low-tech website? (2018)

379 points| okasaki | 4 years ago |solar.lowtechmagazine.com | reply

216 comments

[+] KronisLV|4 years ago|reply
This bit really caught my attention:

> In contrast, this website runs on an off-the-grid solar power system with its own energy storage, and will go off-line during longer periods of cloudy weather. Less than 100% reliability is essential for the sustainability of an off-the-grid solar system, because above a certain threshold the fossil fuel energy used for producing and replacing the batteries is higher than the fossil fuel energy saved by the solar panels.

This, in particular, is an amazing take:

> Less than 100% reliability is essential for the sustainability of an off-the-grid solar system

Saying that sacrificing uptime is a good thing for achieving a particular goal is a take so wildly out of line with what most of the industry believes that it's refreshing, because it really makes you think. What would the world look like if, instead of being obsessed with SRE, we simply assumed or accepted that there will be downtime and treated it like something perfectly normal? Maybe a nice message at the ingress/load balancer level, telling everyone that the site will be back in a bit.

I have no illusions that any company would want something like that, or that it's even feasible for many industries, e.g. healthcare, though the Latvian "e-health" system is an accidental experiment of this at the cost of around 15 million euros of taxpayer money: https://www-lsm-lv.translate.goog/raksts/zinas/latvija/par-e... .

However, if companies have working hours, why couldn't websites? What a funny thought.

Edit: on an unrelated note, it's wild how different setups can be.

For example:

> The web server uses between 1 and 2.5 watts of power...

While at the same time, I have two servers with 200GEs (essentially consumer hardware) running in my homelab, with a TDP of 35W each, though with the UPS and its inefficiencies, as well as the 4G router, the total energy consumption that I have to deal with is around 100W. Granted, I also use them for backups and as CI nodes, but there's still a 20-50x difference in power usage between one of my servers and one of theirs. I guess one can definitely talk about the differences between x86 and ARM there as well.
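In numbers, a quick sketch of that comparison (the even 50/50 split of the 100 W between the two servers is an assumption, not something the comment states):

```python
# The comment's 20-50x figure: ~100 W total for two homelab servers
# (including UPS losses and the 4G router) vs the article's 1-2.5 W server.
total_w = 100
per_server_w = total_w / 2          # 50 W attributed to each server (assumed split)

low_ratio = per_server_w / 2.5      # 20x vs the solar server's worst case
high_ratio = per_server_w / 1.0     # 50x vs its best case

yearly_kwh = total_w * 365 * 24 / 1000   # ~876 kWh/year for the whole homelab
```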

[+] yakubin|4 years ago|reply
One of my pet peeves is how common it is for applications these days to assume you always have a reliable Internet connection. The shift to webapps makes the whole system really fragile. All it takes for an app to become unusable for everyone is a failure in just a couple of nodes in the system.

I don't use GitHub for work, but it's always amusing to see the cries of people who do, when it goes down from time to time. The ability of many companies around the world to continue operating depends on just one company continuously making the right calls.

In the future, when Ubisoft's servers go down, I won't be able to play Assassin's Creed games anymore, even though the servers don't really provide any value to me as a player.

On Spotify I keep downloading the same songs over and over again. Poisoning the environment more than if I got a CD. These days I try to buy songs to download once and then play in my favourite offline music player.

And of course SSH, the "secure shell", an application which doesn't really optimize for shells. The responsiveness of typing at the shell prompt depends on the quality of the Internet connection and the CPU load on the server machine - even though, while you're typing, the server doesn't really have anything interesting to say.

I'm working on the side on creating a company, and those opinions led me to choose the harder path of desktop applications which don't need an Internet connection for anything that doesn't require it by definition. They're more resilient and comfortable to use. I'll probably fail, but I really can't make any other choice at the moment; I need to at least try and see for myself.

[+] ClumsyPilot|4 years ago|reply
"Less than 100% reliability is essential for the sustainability of an off-the-grid solar system"

100% reliability is a terrible, dangerous lie.

People make mistakes. This is well and good when we are talking about entertainment and Instagram.

However, we are now adding fragile Internet-connected code to critical infrastructure: a cashless society means that if your bank is down, you can't get food.

All the cycle hire stands in London need to talk to their server, and when it's down, you lose the ability to get transport.

Same for public transport and Uber - if they fail during a winter night, someone somewhere will freeze to death.

The door intercom in my house needs the internet to do IP calling to your phone. They were digging up my street and cut the cable, and I was stuck in the cold for an hour.

This is going to spread - imagine a failure of the system managing running water, or god forbid the sewer?

[+] gnur|4 years ago|reply
> Less than 100% reliability is essential

This is actually a take most SREs would / should believe. Every added 9 of reliability increases the price exponentially. Finding the correct level of reliability is something most companies should focus on more, because sometimes a single physical machine that could go down once a year for a few hours is perfectly capable of providing all the resources a medium-sized business could need. Proper backups, monitoring and recovery runbooks can even decrease the downtime of such a simple system to minutes, while easily saving you maybe thousands per month.
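The "each added 9" trade-off is easy to make concrete - here's the allowed downtime budget per year at each availability level (a quick sketch, nothing from the comment itself):

```python
# Downtime budget per year for a given number of "nines" of availability.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(nines: int) -> float:
    """Allowed downtime per year at 99.9...% availability (one 9 per unit)."""
    availability = 1 - 10 ** -nines
    return HOURS_PER_YEAR * (1 - availability)

for n in range(1, 6):
    print(f"{n} nines ({100 * (1 - 10 ** -n):.3f}%): {downtime_hours(n):.2f} h/year")
```

Three nines already allows under nine hours of downtime a year, which a single well-run machine with good runbooks can plausibly meet.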

[+] colechristensen|4 years ago|reply
The B&H Photo website observes the Sabbath and won’t sell you anything during it.

There are things which are actually life-critical that you want to be up, and then there’s everything else; many people have really strange opinions about downtime and treat it like some kind of immorality.

I don’t really buy the power arguments about extra capacity but that’s a big thing.

[+] New_California|4 years ago|reply
I love this website and the experiment. But implying that more service downtime is somehow progress of civilization is ridiculous. Civilizational progress needs more reliability, not less.
[+] jhgb|4 years ago|reply
> Less than 100% reliability is essential for the sustainability of an off-the-grid solar system, because above a certain threshold the fossil fuel energy used for producing and replacing the batteries is higher than the fossil fuel energy saved by the solar panels.

I wonder if whoever wrote this has actually done the calculations. These days it's not really difficult to provide fairly reliable solar power by overgenerating while still beating fossil fuels - perhaps not beating them as much as with zero overgeneration, but beating them anyway. So I'd take the quoted text with a considerably large grain of salt.

[+] ajsnigrutin|4 years ago|reply
> However, if companies have working hours, why couldn't websites? What a funny thought.

Our banks have those, and it's a really shitty experience.

Yeah, sure, the e-bank works, you pay, but if the person receiving the money is at another bank, one that my bank has no extra contract with, I have to wait until the next workday for the funds to be transferred. If I want to buy something with a bank transfer on a Friday night, the funds won't get transferred until Monday morning.

[+] nwsm|4 years ago|reply
> Saying that sacrificing uptime is a good thing for achieving a particular goal is a take so wildly out of line with what most of the industry believes that it's refreshing, because it really makes you think. What would the world look like if, instead of being obsessed with SRE, we simply assumed or accepted that there will be downtime and treated it like something perfectly normal? Maybe a nice message at the ingress/load balancer level, telling everyone that the site will be back in a bit.

The difference is that here, the downtime saves energy. "Normal" downtime is unintentional. Servers are still running, just in an error state. They may be constantly attempting to restart themselves at some level (looking at you, k8s). Users are still trying to hit them and their requests may be partially handled. Traffic may be pushed over to newly spun up instances.

We can accept that things happen and running large internet services is hard, and SREs and developers everywhere will rejoice, but it won't save energy.

[+] yoz-y|4 years ago|reply
> However, if companies have working hours, why couldn't websites? What a funny thought.

Because of the day/night cycle, these working hours would usually overlap. This would put people who can’t afford to do their administrative business on the internet during work hours at a disadvantage.

[+] littlestymaar|4 years ago|reply
> Saying that sacrificing uptime is a good thing for achieving a particular goal is a take so wildly out of line with what most of the industry believes that it's refreshing, because it really makes you think. What would the world look like if, instead of being obsessed with SRE, we simply assumed or accepted that there will be downtime and treated it like something perfectly normal?

My bank's website has been down for maintenance for 24h this weekend. And the French government website for declaring VAT will be down for a few hours tomorrow.

Many people still consider downtime as normal. It's not trendy to talk about it, but downtime isn't such a big deal. (Going down because you cannot handle your peak load is very bad though)

[+] bennyp101|4 years ago|reply
"However, if companies have working hours, why couldn't websites?"

There is a UK Gov website that only works during certain times (I think it's something to do with DVLA) - although I assume that is a technical thing, rather than it being powered by solar!

[+] rglullis|4 years ago|reply
> However, if companies have working hours, why couldn't websites?

B&H Photo famously goes offline every Shabbat and I think it is great. It goes to show that you can have a sustainable and profitable business without sacrificing your principles.

[+] Heneeque|4 years ago|reply
I just don't think it is a real issue.

It is much easier to run it on renewable sources, as your DC doesn't move, and the CO2-to-social-benefit ratio is so huge.

Imagine giving up all the mass CO2 producers, transport of people and goods alone, while still being connected to everyone and sharing knowledge and learning etc.

I would like to see addictive things like reddit having office hours, but being able to shop at night for something I need (like a new pair of pants after the old one ripped) saves tons of CO2.

[+] ricardolopes|4 years ago|reply
OTOH, with global replication, it should be possible to keep closer to 100% uptime running on solar. We would need continued cloudy weather globally to bring every solar-powered server down at the same time.

Global replication could also give us more edge computing, which puts less load on global infrastructure (assuming good weather to use the nearest server), which in turn should be able to reduce resource consumption a bit more.

[+] toss1|4 years ago|reply
Yup, assuming less than 100% connectivity and designing for that would really change the world.

While I know everyone likes to hate on Lotus Notes here, one of its key architectural features was that it assumed a seldom-connected, replicated data model - the clients and servers would contact each other at intervals to get updates, and otherwise everything was local.

Decades later, I still miss this functionality a lot - just force an update/replicate before you go somewhere (either office-home or a trip across the world), and everything is the same. No worries about connectivity quality; if it was choppy, just give it more time to handle the retries/error correction, and get the good updates. (Of course, many system managers/admins/devs didn't really consider that much of a feature and treated it like any other network app, so not taking advantage of it, but those who understood its power...)

I really wish others would think in such a seldom-connected model, as the network is becoming increasingly brittle, and working in that model was such a joy.

[+] masklinn|4 years ago|reply
FWIW the article is from 2018, here's the followup from 2020: https://solar.lowtechmagazine.com/2020/01/how-sustainable-is...

Notably wrt uptime:

> Uptime

> The solar powered website goes off-line when the weather is bad – but how often does that happen? For a period of about one year (351 days, from 12 December 2018 to 28 November 2019), we achieved an uptime of 95.26%. This means that we were off-line due to bad weather for 399 hours.

> If we ignore the last two months, our uptime was 98.2%, with a downtime of only 152 hours. Uptime plummeted to 80% during the last two months, when a software upgrade increased the energy use of the server. This knocked the website off-line for at least a few hours every night.

And while they found inefficiencies in their energy conversion system,

> One kilowatt-hour of solar generated electricity can thus serve almost 50,000 unique visitors, and one watt-hour of electricity can serve roughly 50 unique visitors.
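The quoted figures are internally consistent - a quick back-of-the-envelope check (an editor's sketch of the article's own numbers):

```python
# Sanity-checking the uptime stats quoted from the 2020 follow-up article.
days = 351                          # 12 Dec 2018 to 28 Nov 2019
total_hours = days * 24             # 8424 hours in the measured period
uptime = 0.9526
downtime_hours = total_hours * (1 - uptime)
print(f"{downtime_hours:.0f} hours off-line")   # ~399, matching the article

# Visitors per unit of solar energy, as reported:
visitors_per_kwh = 50_000
visitors_per_wh = visitors_per_kwh / 1000       # 50 per watt-hour
```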

[+] vageli|4 years ago|reply
> However, if companies have working hours, why couldn't websites? What a funny thought.

You may be surprised to hear that some websites _do_! I tried to search for the page but can't find it at the moment, but I've experienced this with a Dept. of Homeland Security page that would only be available during certain hours. I believe it was due to some batch-processing-related task, but I can't recall exactly now.

[+] ranger_danger|4 years ago|reply
In Japan many websites still go offline every night "for maintenance" for most of the night.
[+] delusional|4 years ago|reply
So the load balancers would still need 100% reliability? If they go down, your browser can't connect to anything, which would render you unable to send the "nice message".

I think we'd have to have a pretty different internet in this hypothetical world.

[+] 1-more|4 years ago|reply
> if companies have working hours, why couldn't websites

B&H Photo is offline on Saturdays to keep the Sabbath.

[+] tarsinge|4 years ago|reply
Indeed, I’m now curious: what would happen if e-commerce shops had business hours?
[+] rakoo|4 years ago|reply
> Less than 100% reliability is essential for the sustainability

Less than 100% reliability is going to be mandatory as part of the degrowth we'll need if we want to seriously tackle climate change. Less reliability, all the way towards offline-first, is the most sensible way to go

[+] tleb_|4 years ago|reply
As they describe being open to ideas and feedback: maybe using static compression could help. The concept is to pre-compress files (have foo.html.gz available next to foo.html) so that the web server does not have to compress on-the-fly. nginx supports it[0] and gzip_static does not appear in their nginx config[1].

It might not make a difference if nginx has an in-RAM cache of the compressed files. Otherwise, it has the potential to be pretty significant, assuming most requests are made using "Accept-Encoding: gzip" and assuming the web server does not prefer other compression algorithms (eg. Brotli).

[0]: https://nginx.org/en/docs/http/ngx_http_gzip_static_module.h...

[1]: https://homebrewserver.club/low-tech-website-howto.html#full...
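A minimal sketch of such a pre-compression build step (the directory layout, extension list, and compression level are illustrative assumptions, not anything from the magazine's actual setup):

```python
# Pre-compress static files at build time so the web server can serve
# foo.html.gz next to foo.html instead of gzipping on every request.
import gzip
from pathlib import Path

EXTENSIONS = {".html", ".css", ".js", ".svg", ".xml"}  # compressible text types

def precompress(root: str) -> int:
    """Write a .gz sibling for every compressible file under root."""
    count = 0
    for path in list(Path(root).rglob("*")):   # snapshot before writing .gz files
        if path.is_file() and path.suffix in EXTENSIONS:
            gz_path = path.with_name(path.name + ".gz")
            gz_path.write_bytes(gzip.compress(path.read_bytes(), compresslevel=9))
            count += 1
    return count
```

Run once after the static site generator; with `gzip_static on;` in the nginx config, nginx then picks up the pre-made `.gz` file for any client that sends `Accept-Encoding: gzip`.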

[+] bryans|4 years ago|reply
The amount of energy wasted due to poor caching strategies alone could power small countries. Made far worse by the current trend to maximize code readability and organization by using as many dependencies as humanly possible for even the simplest of tasks.
[+] onion2k|4 years ago|reply
> Made far worse by the current trend to maximize code readability and organization by using as many dependencies as humanly possible for even the simplest of tasks.

The current trend is to maximize readability in development and then to use a build pipeline to remove as much of that readability as possible and optimize for size in production.

There is only a very loose relationship between what code looks like when you're writing it and what actually pops out of a compiler or bundler to go to production. You could write something close to plain English these days (literate programming with something like Observable), with a ton of dependencies, and still get a reasonably well-optimized website after a bundler does some tree-shaking, dead code removal, transpiling and minification.

It's complicated, sure, but your complaint here is the exact reason why it's complicated. We want easy-to-write, human-optimized code for dev, and we use that to deliver an Internet- and browser-optimized bundle to the user. The step in the middle of those two is complex.

[+] keiferski|4 years ago|reply
The section on images made me realize how redundant most blog post images are. Usually they are random generic images from Unsplash, Pexels, etc., and exist only to add some color to the page. The specific image is mostly unimportant.

Would it be possible to implement a browser-level image library? Instead of a blogger using a generic Unsplash image, they just select one from the "Firefox Image Library", which is already on the user's computer. This library can be curated and optimized to keep the browser file download manageable.

Considering that images are often 70-80% of the page size, the savings would be significant.

[+] prox|4 years ago|reply
Most of the text on corporate pages are same-ish blahblah, we might as well add that to the Firefox Blurb Library :)
[+] aufhebung|4 years ago|reply
Have fun explaining to everyone why their browser now takes up 20GB.
[+] wgx|4 years ago|reply
I was able to squeeze some more optimisation out of it:

For example: 600px-A20-OLinuXino-LIME2.png is 34,689 bytes as served by the site.

Running it through PNGOUT, OxiPNG, AdvPNG and PNGCrush (all set to lossless) reduces the filesize by 6.8% down to 32,322 bytes with no visible diff to the image.

I guess you'd need to weigh up the energy cost of compressing the image versus the cost of serving them all ~6% larger every time.
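For what it's worth, the per-request saving implied by those figures checks out (just restating the comment's own numbers):

```python
# Per-request bytes saved by the lossless PNG re-optimization described above.
original = 34_689    # bytes, 600px-A20-OLinuXino-LIME2.png as served by the site
optimized = 32_322   # bytes after PNGOUT / OxiPNG / AdvPNG / PNGCrush
saved = original - optimized
print(saved, f"{saved / original:.1%}")   # 2367 bytes saved, ~6.8% smaller
```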

[+] pmlnr|4 years ago|reply
If you just want colour, why not add some clever, but simple SVG graphics?
[+] notreallyserio|4 years ago|reply
I remember back when many Medium articles I read were flooded with animated gifs for emphasis. What a mess.
[+] jmnicolas|4 years ago|reply
> These black-and-white images are then coloured according to the pertaining content category via the browser’s native image manipulation capacities.

But then thousands of browsers are going to consume more electricity to render the images. So if we're thinking globally it would be better to do the job once on the server.

[+] toto444|4 years ago|reply
> One of the fundamental choices we made was to build a static website.

The corollary of this one is : do not offer the possibility of creating a user account when it adds no value. Some analytics that no one cares about does not provide value for instance.

[+] kome|4 years ago|reply
I love the design of this webpage: such a polished look, yet not banal or corporate. It has a strong personality. And its responsiveness is amazing. That's what good design is about. Also, no JS bullshit. What a breath of fresh air.
[+] leptoniscool|4 years ago|reply
Interesting, new technology tends to be "faster" or "more powerful" with energy usage as an after-thought. It's refreshing to see a site dedicated to the opposite end of that spectrum.
[+] powersnail|4 years ago|reply
The only JS I need on my blog is MathJax. Is there a static site generator that can render LaTeX equations into SVGs automatically?

On the one hand, I want to get rid of JS. On the other hand, I prefer to keep the plain-text equation in my markdown, rather than generating and linking to an image.

I feel like I'm one long weekend away from building my own SSG to do this.

[+] andrewfromx|4 years ago|reply
The author had me right up until “However, visuals are an important part of Low-tech Magazine’s appeal, and the website would not be the same without them.” And the next image was like hmmmm that image is smaller and dithered but was it really needed? If you really cared about the environment you would have just gone without that image and saved 100% of the file size. And then should I have children? Each one will consume more resources might as well have zero. And then wait suicide would get rid of me? Where do you draw the line to still live but be good?
[+] Robotbeat|4 years ago|reply
The claim that the last 5% can only be provided by either higher embodied fossil fuel costs in the batteries OR by burning fossil fuels wouldn’t be true, BTW, if you used fuels which are non-fossil to provide that last 5% of energy. It’s problematic to have a sort of alchemical understanding of energy that it HAS to at some point come from fossil fuels OR you have to significantly reduce capability somehow (in this case, website reliability).

In fact, the US hardly used any fossil fuels (virtually all wood or water) until well after the Industrial Revolution (late 1700s to 1820). Even for ironmaking, the US primarily used charcoal (not coal) until around the 1840s, after the first Industrial Revolution, and coal didn’t exceed wood for fuel in the US until the mid 1880s. American industry (and much of Britain’s) was powered by water power, channeled to different buildings, etc. American trains and steamboats ran on biomass. Most US iron used to build these things was made using biomass. (Britain was different because they simply didn’t have as much land and had cut down their trees already.) Sustainably harvested biomass (or electrically synthesized fuels like hydrogen) is a valid source for that last 5%.

Ultimately, there’s a large energy cost to non-reliability, BTW. Any kind of real automation (which mass production—and thus abundance and solar panel production—relies on) relies on its components being highly reliable. Non-automated handcrafted everything means large embodied labor, which means energy usage from human metabolism.

[+] human|4 years ago|reply
I can’t help but say that even WordPress can be served as a "static site" if caching is done right. And maybe the fact that I use an ad-blocker is saving a few watts per week.
[+] TheDudeMan|4 years ago|reply
Lose the dithering and use a sane image format.
[+] docflabby|4 years ago|reply
Could probably host 10000+ static websites on a Mac m1, low power usage and built in UPS...
[+] juloo|4 years ago|reply
What does low tech mean in this context? It's still a computer, with complications on top: solar panels, a battery.

All the hardware they use requires some energy to be built, the network consumes a lot of energy too.

How does this compare to a VPS at a cloud provider, or to using grid electricity (no solar panel)?

I'd expect individual VMs to use less energy than a dedicated computer.

Is it really greener to stop using grid electricity and instead buy a small solar panel? Surely that'll be less green than the grid 20 years in the future.

The article contains many sources but doesn't compare other solutions. 2.5 watts is impressive, though.

[+] INTPenis|4 years ago|reply
Low tech in this case means off grid.

Imo low tech does not have to mean low availability; a low-tech website to me is a static website hosted on object storage with some cloud-based CDN in front.

[+] geon|4 years ago|reply
> All resources loaded, including typefaces and logos, are an additional request to the server, requiring storage space and energy use. Therefore, our new website does not load a custom typeface and removes the font-family declaration, meaning that visitors will see the default typeface of their browser.

This is just nonsensical. Inline styles require no extra request, and you can still use the fonts 99.9999% of users are guaranteed to have.

[+] Cort3z|4 years ago|reply
While reading this article I have started a total of 29 CI/CD workflows, some of which are expected to run for 30 minutes, each probably consuming at least 200 watts (assuming some kind of Xeon machine) for the duration. That's an estimated 1700Wh (19 * 10 minutes, 10 * 30 minutes), or enough to run this website for between 28 and 71 days. Not sure what to think about that.
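Spelling that estimate out (the 200 W draw and the run counts are the commenter's own assumptions; the exact total comes out a little under the rounded 1700Wh):

```python
# Reproducing the back-of-the-envelope CI energy estimate above.
POWER_W = 200                          # assumed draw per CI runner

short_wh = 19 * (10 / 60) * POWER_W    # 19 runs of ~10 minutes
long_wh = 10 * (30 / 60) * POWER_W     # 10 runs of ~30 minutes
total_wh = short_wh + long_wh          # ~1633 Wh, close to the quoted 1700

# Days that energy would run the solar site at its 1-2.5 W draw:
days_at_2_5w = total_wh / 2.5 / 24     # ~27 days
days_at_1w = total_wh / 1.0 / 24       # ~68 days
```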