top | item 20233483

Ramroot – Run Arch Linux Entirely from RAM (2017)

146 points | e18r | 6 years ago | ostechnix.com | reply

81 comments

[+] puppetmaster|6 years ago|reply
Many OSes out there take advantage of RAM disks. Alpine Linux, for instance, lets you boot your root device in RAM by default. SmartOS runs its boot image completely from RAM, and keeps your storage free for a few configs and VMs.

Virtually all network-booted computers run their OSs from RAM. If you have never PXE-booted a machine, it is a beautiful experience once you overcome a few initial challenges: easy upgrades and rollbacks, being able to use a machine with different contexts/platforms just by rebooting, and having your servers cleaned up just by power cycling (assuming you don't have local storage).

If you enjoy the idea of RAM root devices, please try PXE/iPXE to boot your computer from the network. Also, if you have a sufficiently fast network... it is probably faster than booting from disk!

EDIT: I missed the word "all" on the second paragraph, and another typo... sorry!

[+] hazeii|6 years ago|reply
Couldn't agree more. I've got at least a couple of dozen computers (from workstations and thin clients through 486s and Pis) at home, and only the main server and its backup have persistent storage.

Especially useful for Windows; it is rather slower diskless (compared to BSD/Linux), but it makes Windows instances disposable.

Takes a bit of effort (more these days, FUVM systemd), but having a RAM disk plus a diskless read-only /usr is a great way of having computers everywhere all singing the same song.

[+] wayoutthere|6 years ago|reply
Ditto the love for PXE — booting from a SAN appliance stocked with a few dozen fast SSDs is amazing. When I was doing big VMware deployments, I had a PXE bootstrap setup that helped me build an entire 40-node cluster from bare metal in an hour.

Even in the event you have to use some secure zone crap, it's pretty trivial to build, since all PXE requires is dhcpd and tftpd, which are available by default on nearly any *nix variant.
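To sketch just how little that is: dnsmasq alone can serve both the DHCP and TFTP sides. This is a hypothetical minimal config, not anyone's production setup; the interface name, address range, and paths are assumptions to adapt.

```shell
# Minimal PXE service using dnsmasq (bundles DHCP + TFTP in one daemon).
cat > /etc/dnsmasq.d/pxe.conf <<'EOF'
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h
# boot file name handed to PXE clients:
dhcp-boot=pxelinux.0
enable-tftp
# pxelinux.0 plus the kernel/initramfs live here:
tftp-root=/srv/tftp
EOF
systemctl restart dnsmasq
```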

[+] cwt137|6 years ago|reply
PXE boot FTW! I used to work on a project that used LTSP to make a couple dozen thin clients. The thin clients were slow, but the OS was fast. This isn't entirely the OS in RAM, as a big chunk of the file system was mounted over the network, but most of the OS was in RAM.
[+] jandrese|6 years ago|reply
Sadly, PXE booting a modern Linux OS is easier said than done. Gone are the days of just handing out the kernel over TFTP and providing an NFS root. systemd gets super cranky, and you can't even boot the thing without setting some undocumented flags.
[+] mehrdadn|6 years ago|reply
> Also, if you have a sufficiently fast network... it is probably faster than booting from disk!

Is this also true on Windows, with all the hardware initialization it has to do on boot when it finds new hardware?

[+] dekhn|6 years ago|reply
I ran a cluster of 6 nodes in 2001 with network boot (before PXE) and NFS root. It was awesome.
[+] agumonkey|6 years ago|reply
Never having finished my home PXE setup is a great sadness of mine.
[+] codezero|6 years ago|reply
I built something like this back in 2002, when I was at Red Hat, for a client that wanted their firewalls to have read-only configurations on a diskless system. They would update the rules/config/system by burning a new CD and booting from it.

It worked basically like a live CD - creating a temporary filesystem in RAM - and I only learned later that live CDs already existed (I didn't know at the time, and the Internet wasn't as good at finding things as it is today :) )

[+] cjbprime|6 years ago|reply
I wonder whether this achieves anything performance-wise that just cat'ing every file to /dev/null to warm up the buffer cache wouldn't achieve.

In theory, the kernel uses a buffer cache that will hold on to disk pages until they're invalidated by writes. It'll evict the cache if there's memory pressure. But this setup will presumably just crash if there's memory pressure, so that doesn't seem like a win for the RAM disk.
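For comparison, warming the cache by hand really is that simple; a sketch (the paths are examples, and dropping caches needs root):

```shell
# Pull every file under /usr through the kernel's page cache.
# Later reads of these files are then served from RAM, no RAM disk needed.
find /usr -xdev -type f -print0 | xargs -0 cat > /dev/null 2>&1

# Rough before/after check:
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
time cat /usr/bin/env > /dev/null   # cold: read from disk
time cat /usr/bin/env > /dev/null   # warm: served from the page cache
```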

[+] jerf|6 years ago|reply
I can't hard-core prove anything, but I've read the theory on how, if I'm accessing a warm disk cache, putting things into a RAM disk shouldn't speed them up. Yet every time I've done it and tested it, putting things in a RAM disk is faster than having them on disk, even when the very act of copying the stuff into RAM should definitely have warmed everything up just before the test. I've not done it very often, but every time I've tried it in the last ~15 years it's been the case. I don't know exactly why.

I don't think I've done this test with my NVMe disk, though. Nominally, since the entire point of this exercise is that we never physically touch the drive, it shouldn't matter what sort of drive it is we aren't touching, but reality and theory can differ.
[+] jolmg|6 years ago|reply
More than any performance boost that could be achieved from this, I think what's cooler is the ability to unplug the storage device that held the OS.

For example, if you want to run hardware diagnostics on multiple machines at the same time with one consistent, custom environment, you can just set up one USB stick with this and plug it into each machine in turn to boot them up. You wouldn't need to keep the USB stick plugged in while they're running.

[+] jandrewrogers|6 years ago|reply
One of the big advantages is that you do not need to install file systems on your storage for single purpose servers, which allows your application to run on top of the raw block devices. Databases and similar run much better in this configuration (assuming they support raw block devices), especially if you are running on top of a hypervisor.
[+] bjackman|6 years ago|reply
Warming up the page cache is in fact already done on modern systems: http://manpages.ubuntu.com/manpages/xenial/man8/ureadahead.8...

I'd agree that making ureadahead more aggressive would make more sense than just sticking your root in RAM. It shouldn't impact your boot time very much and is much more flexible!

But maybe there's some special perf benefit to the ramroot approach..

[+] iguessthislldo|6 years ago|reply
This is neat, but nothing new. I remember using this built-in feature on live distros like Slax and DSL over a decade ago. It was fun to see old computers run (comparatively) blazingly fast.
[+] Spivak|6 years ago|reply
The new thing is that it can just be switched on/off on an existing install.

Sure, people have been running distros from RAM for a while, but getting the tooling situated to make it like any other feature is super cool.

[+] manelmt|6 years ago|reply
Alternatively, one could use NBD (network block devices) to create a network block device that resides entirely in a server's RAM.

The nice thing about NBD is that it's a super simple protocol, the server runs in user space, and it's easy to modify to suit your own needs. Some time ago I built a version with block deduplication, in ~1k LOC, for a farm of diskless clusters that had very little RAM. The main disk had persistence activated, while the swap drives were pure RAM.
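A rough sketch of the RAM-backed variant using stock tooling rather than a custom server (nbdkit's memory plugin; the hostname, port, and sizes are placeholders):

```shell
# Server side: export 1 GiB of RAM as an NBD device.
nbdkit memory size=1G --port 10809

# Client side: attach the export as a local block device and format it.
# (server.example.com is a placeholder.)
sudo modprobe nbd
sudo nbd-client server.example.com 10809 /dev/nbd0
sudo mkfs.ext4 /dev/nbd0
sudo mount /dev/nbd0 /mnt
```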

[+] VictorSCushman|6 years ago|reply
It's worth mentioning Tiny Core Linux[1], which runs entirely from RAM as well! It's a wonderful little distro with a small footprint. I boot TCL off of a USB stick, load it entirely into RAM, and am able to use it as my daily driver.

[1]: tinycorelinux.net

[+] yellowapple|6 years ago|reply
I always keep TCL around as an emergency boot environment, Just In Case™.
[+] arcmags|6 years ago|reply
Author here. Awesome to see some interest in my project.

I'm currently working on the next version in my spare time (you'll see it in the dev branch). Improvements include:

- configuration now done via /etc/ramroot.conf
- ability to specify actions taken for other partitions
- ability to copy files to any location only when booting from RAM (allowing custom configs and whatnot to be used in the live environment)
- a new install hook that includes binaries and modules rather than adding them to /etc/mkinitcpio.conf
- sudo will no longer be a required package
- custom memory requirement settings
- and more...

Also, I have gotten this to work on Debian, Ubuntu, and Kali with minor modifications. I plan to include a makefile for installing to these distros but don't plan on packaging for them at this time.

https://github.com/arcmags/ramroot

[+] adolph|6 years ago|reply
Once upon a time I did this using Mac OS 6 on a PowerBook 100. The 100 had pseudostatic RAM so the RAM disk would persist between shutdowns. Using OS6 left enough space on the disk to also run an old-for-the-time version of Word, a perfect silent student writing machine.

Some info about it here:

http://www.pugo.org/collection/faq/21/

[+] equalunique|6 years ago|reply
Surprised no one here has mentioned mfsBSD. It's an unofficial FreeBSD answer to this problem. It works very well for some maintenance tasks, like ensuring a new install's disks are clear of partitions/zpools. I have even booted its ISO via IPMI over the internet with OpenVPN.

https://mfsbsd.vx.sk/

[+] nathanasmith|6 years ago|reply
I'm on mobile so I'll keep this short, but I found this[0] a long time ago and it still seems to work. It uses strace, mmap, and mlock to load any program you want, and its libraries, etc., into RAM - but only those programs, so if you're short on memory, no problem. During the setup, you can even mouse around in the program and preload anything involved in that. Back in the day I used it on really slow stuff like OpenOffice, Firefox, GIMP, etc., and it sped up the opening of those programs significantly. The great thing, again, is that you preload only the specific things you need into RAM and nothing you don't. And, once done, it's pretty much set and forget.

[0]https://forums.gentoo.org/viewtopic-t-622085-start-0.html
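A similar effect today can be had with vmtouch, which mmaps files and can lock their pages resident; a sketch, assuming vmtouch is installed and using made-up paths:

```shell
# Touch: fault the files' pages into the page cache.
vmtouch -t /usr/lib/libreoffice/program/*.so
# Verify how much of each file is currently resident.
vmtouch -v /usr/lib/libreoffice/program/*.so
# Lock the pages in RAM so they can't be evicted (needs privileges;
# vmtouch keeps running to hold the lock).
sudo vmtouch -l /usr/lib/libreoffice/program/*.so
```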

[+] ryanmjacobs|6 years ago|reply
I run Arch Linux and I'm really excited to try this out tonight. I always make huge /tmp ramdisks (50 GB+) and run everything in there that's filesystem intensive. It's so much faster. This could be perfect for easy, stateless Arch Linux servers.
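For reference, a big RAM-backed scratch area like that is just a tmpfs mount; a sketch (sizes and paths are examples, and tmpfs only consumes RAM for what is actually stored in it):

```shell
# Ad hoc: mount a 50 GB RAM-backed scratch filesystem.
sudo mkdir -p /mnt/scratch
sudo mount -t tmpfs -o size=50G,noatime tmpfs /mnt/scratch

# Persistent: an /etc/fstab line for an oversized /tmp.
# tmpfs  /tmp  tmpfs  size=50G,noatime,mode=1777  0  0
```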
[+] benj111|6 years ago|reply
"Please note that this prompt (y/N) defaults to yes with a 15 second timeout if 4G or more of RAM is detected"

I understood the capitalisation to indicate the default. Or does the default change? That seems bad if you're intending to wrap this in a script.

[+] jlgaddis|6 years ago|reply
The author's demo/test machine only had 2 GB of RAM so perhaps it defaulted to "N" for that reason.
[+] calebm|6 years ago|reply
In college, I would use computer lab computers as a proxy for my... experiments. I would just load up a small Linux distro from a CD into ram. That way, I could just reboot the computer when done, and it would reboot into Windows.
[+] LinuxBender|6 years ago|reply
I would love to see this feature added to CentOS/RHEL. In the past, we used NFS Diskless which is anything but diskless. Hacking together an initrd that loads everything into ram is, well, hacky. To have a dracut function or a toggle to enable this would be great for lightweight deployments, testing, labs, vagrant, etc... I'm sure that RHEL must have been contemplating it because there is /etc/sysconfig/readonly-root unless that was just a better way to do NFS diskless...
[+] jabl|6 years ago|reply
We use warewulf (via OpenHPC) for diskless CentOS 7 compute nodes at work. Basically, the initramfs creates a tmpfs, downloads the OS image to it, and switches root to it.
[+] bfgpereira|6 years ago|reply
This is nothing new, or exciting. Most Linux distros will do this, whether via PXE with an NFS root, or by rsyncing a rootfs to RAM and then booting, etc. There are literally so many ways of doing this that I couldn't list them all.

Want to do something cool? Boot a whole computer cluster using BitTorrent as a backend, diskless or diskful, at lightning speed: https://github.com/dchirikov/luna

[+] wil421|6 years ago|reply
I believe FreeNAS does something like this during startup. You can “install” it on mirrored USB drives in case you run out of power. The redundant USBs are used to install the OS back into RAM. An extra USB in case of corruption and they’re way cheaper than SSDs. I’m assuming you still need to write config files back to the USB.

FreeNAS prefers ZFS, so it's RAM hungry anyway. The article recommends 500 MB more than you need.

Can you do this to an Arch VM?

[+] imtringued|6 years ago|reply
SSDs cost 26€ for 128GB and offer significantly more performance and reliability. USB drives are not significantly cheaper.
[+] lordleft|6 years ago|reply
Out of curiosity, does data persist after a shutdown when an OS runs in RAM? Are there dumps to an HDD?
[+] monocasa|6 years ago|reply
Nope.

The Pirate Bay used to run their servers like this. They'd boot off a USB stick, pivot root to a RAM disk, and then unmount the USB stick. Then, when the cops would seize the machines and cut power to save as much evidence as they could, they'd actually be doing the opposite.

[+] ww520|6 years ago|reply
Unfortunately no. All data in the RAM disk is gone. You can mount an HDD after boot to save data if needed.
[+] mlurp|6 years ago|reply
Does TAILS not do this? Forgive my ignorance, I only know the basic idea of it.
[+] mikepurvis|6 years ago|reply
Most live distros do this: rather than have an initramfs that finds and mounts the removable media, they just put everything in the initramfs itself.

What's new here is being able to optionally and seamlessly copy the regular disk install into RAM on boot.