eska | 2 months ago
Let’s say you have UI textures that you always need, common player models and textures, and the battle music, but world geometry and monsters change per stage. Create an archive file (pak, wad, …) for each stage, duplicating the UI, player, and battle-music assets into each archive. This lets you fully utilize HDD pages (a small config file on its own won’t fill a 4 KB filesystem page, or even a smaller disk sector), and all the data for one stage gets read into the disk cache in one fell swoop.
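A minimal sketch of that build step, using Python's zipfile as a stand-in for a pak/wad writer (the asset names and byte contents here are made up for illustration):

```python
import io
import zipfile

# Toy asset contents; a real build pipeline would read these bytes from source files.
ASSETS = {
    "ui/atlas.png": b"ui-bytes",
    "player/model.bin": b"player-bytes",
    "audio/battle.ogg": b"music-bytes",
    "stage01/geometry.bin": b"geo-bytes",
    "stage01/monsters.bin": b"monster-bytes",
}

# Assets every stage always needs, duplicated into each stage archive.
SHARED = ["ui/atlas.png", "player/model.bin", "audio/battle.ogg"]

def pack_stage(stage_files):
    """Build one stage archive: shared assets first, then stage-specific data."""
    buf = io.BytesIO()
    # ZIP_STORED keeps entries uncompressed and contiguous, closer to a classic pak.
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as pak:
        for name in SHARED + stage_files:
            pak.writestr(name, ASSETS[name])
    return buf.getvalue()

stage01 = pack_stage(["stage01/geometry.bin", "stage01/monsters.bin"])
```

The duplication trades disk space for load time: loading a stage becomes one sequential read of one archive, with no cross-referencing into a shared data file elsewhere on the disk.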
On optical media like CDs, one would even place some data closer to the middle or on the outer edge of the disc, because the read speed differs across the disc due to the linear velocity.
This is an optimization for bandwidth at the cost of size (which often wasn’t a problem, because the medium wasn’t filled anyway).
swiftcoder | 2 months ago
HDDs also have to deal with fragmentation. I wonder what the odds are that you get to write 150 GB (and then regular updates in the 30 GB range) without it breaking into fragments...
pixl97 | 2 months ago
Microsoft has a paper somewhere showing that IO speed only starts to drop once file fragments get below 64 MB. So you can split that file into a few thousand pieces without much performance loss at all.
everforward | 2 months ago
Even if you pack those, there's no guarantee they don't get fragmented by the filesystem.
CDs are different not because of media, but because of who owns the storage media layout.
Karliss | 2 months ago
A single large file is still more likely to be mostly sequential than 10,000 tiny files. With a large number of individual files, the filesystem is more likely to opportunistically use the small files to fill previously left holes. Individual files more or less guarantee multiple syscalls per file just to open and read it, plus potentially more indirection and jumping around on the OS side to read each file’s metadata. Individual files also increase the chance of accidentally introducing random seeks, due to mismatches between the order the updater writes files, the way the filesystem lays things out, and the order in which the level description files list and read them.
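The two access patterns being contrasted can be sketched as a toy comparison (the file names and counts here are invented; Python's zipfile again stands in for a pak reader):

```python
import os
import tempfile
import zipfile
from pathlib import Path

# Toy stage: the same 100 small assets stored as loose files and as one archive.
root = Path(tempfile.mkdtemp())
assets = {f"asset_{i:03}.bin": os.urandom(256) for i in range(100)}

loose_dir = root / "loose"
loose_dir.mkdir()
for name, data in assets.items():
    (loose_dir / name).write_bytes(data)  # one create + write per asset

pak_path = root / "stage.pak"
with zipfile.ZipFile(pak_path, "w", zipfile.ZIP_STORED) as pak:
    for name, data in assets.items():
        pak.writestr(name, data)

# Loose files: a separate open/read/close (several syscalls) for every asset,
# plus a per-file metadata lookup, and no control over on-disk placement.
loose = {n: (loose_dir / n).read_bytes() for n in assets}

# Archive: a single open; entries sit in the order they were written, so
# streaming through the pak reads the whole stage near-sequentially.
with zipfile.ZipFile(pak_path) as pak:
    packed = {n: pak.read(n) for n in assets}
```

Both paths yield identical data; the difference is purely in how many filesystem operations and seeks it takes to get there.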