top | item 46938950

knorker | 21 days ago

Yes, but the gains may be lost in the logistics of shipping the build binary back to the PC for actual execution.

An incremental build of C (not C++) code is pretty fast, and was pretty fast back then too.

The q1source.zip this article links to is only 198k lines spread across 384 files; the largest file is 3391 lines. Though the linked q1source.zip is QW and WinQuake, so not exactly the DJGPP build. (Quoting the README: "The original dos version of Quake should also be buildable from these sources, but we didn't bother trying").

It's just not that big a codebase, even by 1990s standards. It was written by just a small team of amazing coders.

Correct me if you have actual data to prove me wrong, but my memory of the time is that build times were really not a problem. C is just really fast to build. Even back in 1997 (or thereabouts), when the source code was found lying around on an ftp server or something: https://www.wired.com/1997/01/hackers-hack-crack-steal-quake...

pdw | 21 days ago

"Shipping" wouldn't be a problem, they could just run it from a network drive. Their PCs were networked, they needed to test deathmatches after all ;)

And the compilation speed difference wouldn't be small. The HP workstations they were using were "entry level" systems with (at max spec) a 100MHz CPU. Their Alpha server had four CPUs running at probably 275MHz. I know which system I would choose for compiles.

knorker | 20 days ago

> "Shipping" wouldn't be a problem, they could just run it from a network drive.

This is exactly the shipping I'm talking about. The gains would be minuscule (because, again, an incremental compile was never actually slow, even on the PC), and the network overhead adds up. Especially back then.

> just run it from a network drive.

It still needs to be transferred to run.

> I know which system I would choose for compiles.

All else equal, perhaps. But were you actually a developer in the 90s?

frumplestlatz | 21 days ago

> I mean correct me if you have actual data to prove me wrong, but my memory at the time is that build times were really not a problem.

I never had cause to build Quake, but my Linux kernel builds took something like 3-4 hours on an i486. It was a bit better on the dual-socket Pentium I had at work, but it was still painfully slow.

I specifically remember setting up gcc cross-toolchains to build Linux binaries on our big-iron UltraSPARC machines because the performance difference was so huge: more CPUs, much faster disks, and lots more RAM.

That gap disappeared pretty quickly as we headed into the 2000s, but in 1997 it was still very large.

RupertSalt | 20 days ago

I remember two huge speedups back in the day: `gcc -pipe` and `make -j`.

`gcc -pipe` worked best when you had gobs of RAM. Disk I/O was so slow, especially compared to DRAM, that being able to bypass all those temp-file steps was a godsend. So you'd always opt for the pipeline if you had the memory to spare.

`make -j` was the easiest parallel-processing hack ever. With multiple CPUs or cores, a bare `make -j` would spawn as many jobs as the dependency graph allowed and keep them all busy. You could cap it with `-j4` or `-j8` if you wanted to hold back some resources or keep the machine interactive. But the parallelism was another godsend when you had a big compile job.
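The `-j` behavior is easy to see with a toy Makefile whose targets are independent (targets and messages here are made up for illustration):

```shell
set -e
dir=$(mktemp -d)
# Two independent targets; with -j2 make may run them concurrently.
printf 'all: a b\na:\n\t@echo built-a\nb:\n\t@echo built-b\n' > "$dir/Makefile"
make -C "$dir" -j2
```

Both `built-a` and `built-b` are printed, though with parallel jobs the order is not guaranteed, which is also why badly specified dependencies only bite once you turn `-j` on.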

It was often a standard but informal benchmark to see how fast your system could rebuild a Linux kernel, or a distro of XFree86.

knorker | 20 days ago

> Linux kernel builds took something like 3-4 hours on an i486

From cold, or from modified config.h, sure. But also keep in mind that the Pentium came out in 1993.