top | item 37218773

jbott | 2 years ago

This is linked in the footer of this page, but Graham Christensen has an excellent blog post, “Erase your darlings” [1], that explains why you might want to do this. There is a script floating around on GitHub [2] that does the install automatically, but it needs to be modified slightly to work on newer NixOS versions. I have a forked version [3] that I used this morning; my decisions might not make sense for everyone, so it’s provided as-is for now :)

1. https://grahamc.com/blog/erase-your-darlings/

2. https://gist.github.com/mx00s/ea2462a3fe6fdaa65692fe7ee824de...

3. https://gist.github.com/jbott/531b9d555dae7f197f25326ef251f1...
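The core of the “Erase your darlings” approach is rolling the root dataset back to a blank ZFS snapshot early in boot. A minimal sketch of that idea for configuration.nix — the pool and dataset names are illustrative, so adjust them to your layout:

```nix
# Roll / back to an empty snapshot on every boot.
# Assumes the snapshot was created once, right after dataset
# creation, with: zfs snapshot rpool/local/root@blank
boot.initrd.postDeviceCommands = lib.mkAfter ''
  zfs rollback -r rpool/local/root@blank
'';
```

Anything you want to survive a reboot then lives on a separate dataset (e.g. one mounted at /nix or /persist) that the rollback never touches.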

SkyMarshal|2 years ago

There’s an even better setup than Erase your Darlings. Instead of putting / on a ZFS pool and erasing and rewriting it every reboot, just put tmpfs on /.

With / entirely in memory, it automatically gets wiped and recreated every reboot, without needing to actively erase a disk, and thus with much less drive wear (depending on how frequently you reboot). Some things are also faster when loaded from RAM instead of disk. And it’s overall a cleaner, simpler setup.

https://elis.nu/blog/2020/05/nixos-tmpfs-as-root/
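A minimal version of that tmpfs-root setup in configuration.nix might look like this — the size and mode values are illustrative, not prescriptive:

```nix
# Mount a size-capped tmpfs as /. Everything not on a persistent
# mount (e.g. /nix, /boot, /home) disappears on reboot.
fileSystems."/" = {
  device = "none";
  fsType = "tmpfs";
  options = [ "defaults" "size=2G" "mode=755" ];
};
```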

You can put tmpfs on /home as well, using Impermanence and Home Manager to persist things like ~/.config and whatever other files or folders need to survive between reboots:

https://elis.nu/blog/2020/06/nixos-tmpfs-as-home/
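With the impermanence module, the persisted paths can be declared in one place. A sketch, assuming a persistent dataset mounted at /nix/persist and a user named alice (both illustrative):

```nix
# Bind-mount selected state out of the persistent volume into
# the otherwise-ephemeral / and /home.
environment.persistence."/nix/persist" = {
  directories = [ "/var/log" "/var/lib/nixos" ];
  files = [ "/etc/machine-id" ];
  users.alice = {
    directories = [ ".config" ".ssh" ];
  };
};
```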

tmpfs on / is enabled by NixOS’s unique design: it keeps the entire system in /nix/store and then symlinks all the paths into their appropriate places under /. With tmpfs on /, NixOS automatically recreates those symlinks on reboot. Very little setup effort is required to make this work.

Cu3PO42|2 years ago

I was aware of the possibility of just using tmpfs, but I went with ZFS for / anyway, mostly for one reason: I can erase root on every boot but still keep snapshots of the last few boots. That means that if I mess up some configuration and fail to persist some important data, I can still go back and recover it if I need to.

jbott|2 years ago

I don’t think the point about disk wear is true. ZFS is already a block-based CoW filesystem, so the “erase” is actually something more like “make a new metadata entry that points to an earlier snapshot”. Plus, I’d expect normal disk usage (e.g. Chrome’s disk cache) to far outweigh any wear from the relatively small amount of data written into /.

Unless you’re working on a server with a ton of RAM, I also think using tmpfs makes it more likely you’ll shoot yourself in the foot with excess memory pressure. I don’t know of a way for the kernel to free that memory if you write a huge file to the tmpfs partition by mistake, unless you use swap, and then you have the problems that come with that.

Tmpfs might be faster though!
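One partial mitigation for the memory-pressure concern is compressed swap in RAM via zram, which NixOS can enable declaratively; a sketch (the percentage is arbitrary, and this only softens the problem rather than solving it):

```nix
# Compressed swap in RAM gives the kernel somewhere to evict
# pages under pressure, without the wear and latency of disk swap.
zramSwap = {
  enable = true;
  memoryPercent = 50;
};
```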

laurencerowe|2 years ago

You can also have tmpfs on / as an overlay filesystem. This is quite common for network-boot devices that all share an NFS root.
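In NixOS terms, a writable tmpfs layer over a read-only lower root could be declared roughly like this — all the paths here are illustrative, and real netboot images usually wire this up in the initrd instead:

```nix
# tmpfs upper layer over a shared read-only NFS root.
fileSystems."/" = {
  fsType = "overlay";
  device = "overlay";
  options = [
    "lowerdir=/mnt/nfsroot"   # shared, read-only
    "upperdir=/mnt/rw/upper"  # per-machine writes (tmpfs)
    "workdir=/mnt/rw/work"    # overlayfs scratch space
  ];
};
```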

someplaceguy|2 years ago

That's exactly what this HN post is about. The wiki page being linked to has instructions for doing that.

Tainnor|2 years ago

Sounds interesting in theory, but modern development setups are often full of tools, dependencies, IDE indices, intermediate build results, etc., all of which may take a very long time to download and build from scratch. Sure, you could try to keep track of all such things and persist them, but it’s going to be a lot of effort to figure out where all your tools are dumping their state.

solatic|2 years ago

I don't think of impermanence as a tool for development setups, but rather a tool to improve production security. When a server gets compromised, it's common for an attacker to leverage their initial access to set up backdoor access for themselves, e.g. an additional privileged user or privileged service which phones home, so that they're no longer reliant on the original vulnerability to gain access again. This is important to ensure that they can launch a more damaging attack at a more opportune time (e.g. at the beginning of a long weekend).

Now consider a stateful server which you need to host (e.g. Kubernetes control plane / etcd) where you ordinarily cannot practice immutable infrastructure due to the stateful nature of the server. Modules like impermanence allow you to guard against this kind of compromise by simply wiping out everything but the actual state as a result of rebooting. Any privileged users or malicious processes (which, of course, are not part of the system configuration used at boot) get wiped out at every reboot.

It's not a silver bullet - an attacker could simply releverage the original vulnerability and set up access again - but doing the reboots frequently would force the vulnerability to be re-exploited each time, making it a pattern of access more likely to be detected in a SIEM.

yjftsjthsd-h|2 years ago

> often full with tools, dependencies, IDE indices, intermediate build results

Tools and dependencies go in Nix. Indexes and temporary build stuff are just caches, which you can still have (I'd lean towards regenerating per reboot, but YMMV).

> but it's going to be a lot of effort trying to figure out where all your tools are dumping their state.

Fair. Kind of an indictment of the current state of the ecosystem, but yes.

Cloudef|2 years ago

With NixOS, you typically use flake.nix or shell.nix to "shell into" a development environment, so this is kind of a non-issue. You can also use `nix run` or `nix-env` to mix traditional package management with this.
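A minimal flake with a dev shell, as a sketch (the package choices are illustrative):

```nix
# flake.nix — `nix develop` drops you into a shell with these
# tools available, without installing anything into the system
# profile, so an ephemeral / costs you nothing here.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.nodejs pkgs.rust-analyzer ];
      };
    };
}
```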

> but it's going to be a lot of effort trying to figure out where all your tools are dumping their state.

This is true, though. If you don't want to manage all of their configs from Nix, I would make their dump locations, or $XDG_DIRS/$HOME (if you are lazy), persistent in this case.
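For example, with the impermanence module's Home Manager integration you can whitelist the usual state dumps per user. A sketch — the persistence root and user are illustrative, and the directory list is just where some common tools happen to keep state:

```nix
home.persistence."/nix/persist/home/alice" = {
  directories = [
    ".cargo"               # Rust toolchain and registry caches
    ".npm"                 # npm cache
    ".local/share/direnv"  # direnv's allow-list and env cache
    ".config"              # catch-all if you don't want to enumerate
  ];
  allowOther = true;  # let the bind mounts be visible to other users/root
};
```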