josephscott | 7 months ago

"The temporary-files directory /tmp is now stored in a tmpfs" - https://www.debian.org/releases/trixie/release-notes/issues....

I am not a fan of that as a default. I'd rather default to cheaper disk space than more limited and expensive memory.

_mlbt|7 months ago

> You can return to /tmp being a regular directory by running `systemctl mask tmp.mount` as root and rebooting.

> The new filesystem defaults can also be overridden in /etc/fstab, so systems that already define a separate /tmp partition will be unaffected.

Seems like an easy change to revert from the release notes.
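Concretely, the two mechanisms the release notes mention look like this (a sketch: both require root, and the 2G cap in the fstab example is an arbitrary illustrative value, not a recommendation):

```shell
# Revert /tmp to a plain on-disk directory, then reboot:
systemctl mask tmp.mount

# Alternatively, keep the tmpfs but override its defaults: an explicit
# /etc/fstab entry takes precedence over the systemd default. For example,
# to cap /tmp at 2 GiB:
#
#   tmpfs  /tmp  tmpfs  size=2G,mode=1777  0  0
```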

As for the reasoning behind it: it's a performance optimization. Most temporary files are small and short-lived, which makes them ideal candidates for being stored in memory and then paged out to disk once they are no longer actively used, freeing up memory for other purposes.

hsbauauvhabzb|7 months ago

That seems like a bug in those applications, which use the filesystem instead of performing in-memory operations or using named pipes.
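The named-pipe approach can be sketched in shell (the FIFO path comes from mktemp and is arbitrary). The FIFO node is just a rendezvous point on the filesystem; the bytes flow from writer to reader through a kernel buffer and are never stored anywhere:

```shell
# Create a FIFO at an unused path (the node itself occupies no data blocks).
fifo=$(mktemp -u)
mkfifo "$fifo"

# Writer: runs in the background and blocks until a reader opens the pipe.
printf 'payload\n' > "$fifo" &

# Reader: consumes the data directly; nothing is staged on disk or in /tmp.
read -r line < "$fifo"
rm "$fifo"
echo "$line"
```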

chromakode|7 months ago

For users with SSDs, saving the write wear seems like a desirable default.

Aachen|7 months ago

I have yet to hear of someone wearing out an SSD on a desktop/laptop system (not a server; I'm sure there are heavy applications that can run 24/7 and legitimately get the job done), even considering bugs like the Spotify desktop client uselessly writing loads of data some years ago.

Making such claims on HN attracts edge cases like nobody's business, but let's see.

archargelod|7 months ago

I'm using OpenSUSE Tumbleweed that has this option enabled by default.

Until about a year ago, whenever I would try to download moderately large files (>4GB) my whole system would grind to a halt and stop responding.

It took me MONTHS to figure out what the problem was.

Turns out that a lot of applications use /tmp for storing files while they're downloading. And a lot of these applications don't clean up on failure; some don't even move the file into place on success, but instead extract it and copy the extracted files to the destination, leaving even more stuff in /tmp.

Yeah, this is not a problem if you have 4x more RAM than the size of the files you download. Surely that's the case for most people. Right?

hiq|7 months ago

How did you figure that this was the problem?

If it's easily reproducible, I guess checking `top` while downloading a large file might have given a clue, since you could have seen that you were running out of memory?
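One quick check (standard coreutils plus /proc; nothing Debian-specific is assumed):

```shell
# What filesystem backs /tmp? "tmpfs" means files there consume RAM.
fstype=$(stat -f -c %T /tmp)
echo "/tmp is backed by: $fstype"

# How full /tmp is, and how much memory remains overall:
df -h /tmp
grep -E 'MemTotal|MemAvailable' /proc/meminfo
```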

pluto_modadic|7 months ago

finally they're using a tmpfs. thank goodness <3

api|7 months ago

Wait... that means a misbehaving program can cause out of memory errors easily by filling up /tmp?

That's a very bad default.

CamouflagedKiwi|7 months ago

A misbehaving program can already cause out-of-memory errors by filling up memory. It wouldn't persist past that program's death, but the effect on other programs is pretty catastrophic regardless.

paulv|7 months ago

The default configuration for tmpfs is to "only" use 50% of physical RAM, which still isn't great, but it's something.
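You can read the effective cap off a live system: findmnt prints the size= option (its absence means the kernel's 50%-of-RAM tmpfs default applies). The remount below is root-only, and the 1G figure is an arbitrary example:

```shell
# Show the source, target, fstype, and options for whatever backs /tmp.
findmnt --target /tmp

# Root-only: shrink (or grow) a mounted tmpfs in place, e.g. to 1 GiB:
#   mount -o remount,size=1G /tmp
```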