Fun. I was one of the software developers on the SPARCbook series (prior to the acquisition), as well as the Alpha machine for Digital, the P1000/P1300 (Pentium), and IBM (PowerPC).
Great memories of a small team taking on huge challenges.
That's awesome! My grandpa had an Alpha desktop back in the '90s and I loved that thing. It was my first non-Apple or MS-DOS experience so I have a soft spot for it, and I haven't heard anyone mention those in years.
I used a SPARCbook back in 1996 for a business trip to Germany to install our search engine at t-online.de. The CFO of the company wanted to know at all times who had this machine, because it was so expensive: around $20k USD.
It didn't have any usable battery life; if you tried to do any type of development, it would last about 20 minutes before it shut down.
The many disk partitions made us get creative to fill the 1.2GB disk with data. We had to put files all over the place and use softlinks to get our software to work. This odd setup led a sysadmin to "clean things up" one afternoon; he deleted most of the OS in the process. He was really mad because he had to stay up all night to do a reinstall.
At the time I remember being amazed that the company I worked for paid out $$$ to get some SGI machines up to ~160 MB of RAM. This was because the software we used had a memory leak of some sort and needed such an extravagantly 'wasteful' amount of memory.
Having the sockets to support up to 160 MB was one thing; having long enough arms and deep enough pockets to pay for it was something else. Hence 'unimaginable'.
Nowadays you get a couple of memory sockets in your laptop. Back then, a workstation, whether a laptop or something else, would have lots and lots of sockets for memory, usually with very few of them filled.
In 1997, a powerful desktop PC had 64 MB. 128 MB was definitely workstation-class, and extremely rare on a laptop. For reference, the SGI O2 workstation came out in late 1996, and standard configurations were 64, 128, and 256 MB of RAM, maxing out at 1 GB (which cost several thousand dollars). The previous SGI Indy (1993 to 1997), still a real beast in 1997 if you had the 180 MHz R5000 version, maxed out at 256 MB.
Remember that back then, hardware evolved really fast; I had a 128 MB laptop in 2000. So it went from "unheard of" to "standard" in 5 years or so.
My current work machines have had 8 GB for 7 years :)
Back in the late 90s most consumers and even some power users were using 486 laptops (myself included) because Pentium laptops were still pricey. Affordable laptops back then were usually a generation behind desktops in speed and hardware support; there wasn't a mobile culture like there is today. When you did get a PowerBook or HP like you mentioned, you paid through the nose for it.
I was using a TI TravelMate 4000M (made by Acer) from 1995 to 1998, it was a graduation gift from my estranged father. It propelled me into modern (for the time) computing and set me on the course to the career I have today. I never upgraded beyond 8MB of RAM (it came with 4MB on board and supported a 4MB or 16MB additional module) but that was enough to do what I needed at the time.
I would have loved to have an advanced workstation laptop like in the OP article, but I didn't have $21k laying around for something like that.
>> "A whopping 128MB of RAM (in 1997, this unimaginable in a laptop)" [sic]
> Really? The PowerBook in 1997 supported up to 160 MB
I think it would have been pretty unusual to have that much RAM installed at that time. I remember my parents upgrading our PC from 8MB to 16MB at about that time (or maybe 16MB to 32MB), and that was more than any of my friends had.
The PowerBooks weren't that far away from the SPARC boxes. I used to sell them at retail in college to doctors and similar folks. They'd frequently walk out dropping $10k on the Amex for the device, software, gear, etc.
I had a Toshiba Tecra that was loaded from a memory perspective and that device retailed around $4k.
I've got a couple Tadpoles, including a PA-RISC PrecisionBook, but actually I like the Sun Ultra-3 better. The one I have is a rebadged Tadpole Viper in beautiful purple. The 1.2GHz CPU gets stinking hot unless you throttle it and the battery life is about LOL minutes, but it can still run relatively recent Firefoxes and Solaris is still useful.
I had a SPARCstation Voyager for a while when I worked at Sun. One of the very first all-in-ones ever with a color LCD screen. (My "regular" Ultra2 desktop also had a couple of 24" HD monitors, which weighed more than a sack of concrete each, and were much bulkier, too.)
Little-known story about how this little box saved the shuttle program: When Dan Goldin took over NASA, he mandated that all of Mission Control would run on Windows NT. (This was back when the NT kernel itself was excellent, as it had always been, but the surrounding bits had serious stability problems.) The shuttle astronaut corps rebelled, literally refusing to fly if forced to bet their lives against the BSOD. The solution was to provide the astronauts with their own independent copy of the software Mission Control had previously run on their SPARCstations, so that even if MCC crashed, the shuttle wouldn't. Since the SPARCbook was already flight-qualified, the astronaut revolt was quickly and quietly resolved, with almost no one even being aware of it...
> Normally this isn’t a problem because you can boot a Solaris installation CD or network image and clear the root password in the /etc/shadow file. BUT....I have no SCSI CD-ROM that I can plug into the external SCSI port on the SPARCbook
That brought back a memory. I was working at a place in the 90s that got one of those SPARCbooks that we were setting up for a customer, but we didn't have a SCSI CD-ROM drive to install with. I actually went to my previous employer (a Sun workstation support team in the computer center of a university) who kindly let us use one of theirs on site.
At some point I lost my lust for obscure hardware like this, although I remember what it felt like. I'm really disappointed he didn't take an image of the disk as it was when he got it, because it would have been really interesting to root through all the old Nortel stuff.
I still have all the code for sure - I spent a lot of time poring over it recently. They spent a lot of time creating elaborate software-based mobile test environments to test it, and I get the impression that the project was more of a proof-of-concept thing as a result. The code was nicely modularized and somewhat tight (with a few things that drove me nuts).
Oh man, I remember seeing one of these (or something really similar) when I went with my dad to work as a kid. When he told me that it cost more than $19,000, I flat-out didn't believe him. To nine-year-old me, there was no appreciable difference between $19,000 and $1,000,000,000, and I could not fully understand how anyone would pay that much for anything.
It's a more staggering number now - I get $34k using the CPI inflation calculator. Can you imagine paying that much for a single laptop? Does anyone even make laptops that cost that much now?
The display in the images looks quite sharp - a resolution of 1280x1024! Not too awful by today's standards. Lenovo's ThinkPad X280, currently available with a retail price above 1000 euro, comes with a 1366x768 display as the default option.
ZFS takes this concept up one level of abstraction. You have a `zpool`, which is a collection of disks, and then "datasets", which are analogous to partitions. One property of a dataset is its mountpoint, but there are many more: in ZFS, different datasets can have different compression algorithms, different checksum algorithms, different log (journal) characteristics, different ACL types; ZFS can also manage exporting a dataset over the network (setting up NFS/CIFS for you), encrypt it, etc.
That might give you a glimpse into why it's useful to split off different datasets. I might want to tune my database (/var/lib/postgres) for throughput, while my home directory is tuned for maximum compression & encrypted, while my public fileshares are unencrypted, etc.
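To make that concrete, here's roughly what that looks like at the command line - a minimal sketch, assuming OpenZFS property names (the pool name, disks, and dataset layout are made up):

    zpool create tank mirror /dev/sda /dev/sdb

    # database dataset tuned for throughput (8K records to match Postgres pages)
    zfs create -o mountpoint=/var/lib/postgres -o recordsize=8K -o logbias=throughput tank/db

    # home directory: maximum compression, encrypted (prompts for a passphrase)
    zfs create -o compression=gzip-9 -o encryption=on -o keyformat=passphrase tank/home

    # public fileshare: unencrypted, exported over NFS by ZFS itself
    zfs create -o sharenfs=on tank/public

Each dataset draws from the same pool of free space, so you get the per-"partition" tuning without having to guess sizes up front.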
Also, it's often useful to have different partitions if you need different filesystems; not every filesystem is suited to every particular use case. And sometimes you're constrained by other software: for instance, many older bootloaders had very limited filesystem support, and even today your EFI system partition needs to be FAT-formatted, which should serve as an obvious reason to segregate `/boot/efi` from the rest of your system.
If one filesystem gets full, you can still do some work on the others. You can also more easily unmount a filesystem to fsck it without needing to boot live media. And you can mount filesystems with different flags, like read-only on /boot to prevent accidents.
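All of this ends up expressed in an ordinary /etc/fstab. A minimal sketch (devices and UUIDs are hypothetical):

    # one disk, several partitions, different filesystems and mount flags
    UUID=1111-AAAA  /boot/efi  vfat  umask=0077   0  2   # ESP must be FAT
    /dev/sda2       /boot      ext4  defaults,ro  0  2   # read-only to prevent accidents
    /dev/sda3       /          ext4  defaults     0  1
    /dev/sda4       /home      ext4  defaults     0  2   # can be unmounted and fsck'd on its own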
Modern smartphones do this; on Android, for example, you'll find a filesystem for system binaries (generally mounted read-only), a filesystem for the base OS and system apps, and another filesystem for user-installed apps. If a rogue app fills the entire filesystem it resides on, the system apps can still function.
/opt is where several commercial packages from Sun would install, so by being separate if you updated your OS then you wouldn't necessarily have to reinstall NeWSprint or separate products that were, honestly, a complete pain in the ass to install.
Yeah... It makes more sense for /var and even /usr/local. / and /usr are both managed by the OS installation. /opt makes a little more sense because it's often used by large, third party packages any one of which could be larger than the whole of / and /usr together.
If / or /var run out of space (even now) then lots of daemons get into quite a bit of trouble. Anything from hanging to consuming a lot of CPU in the disk allocator.
It's often not possible to ssh into a machine where / or /var has filled up.
There was also the problem of filesystem robustness. Whilst things were a lot better than on the non-UNIX platforms of the day, they weren't quite as good as they are today. These filesystems often were not journalled, which meant that if you had a power failure you could lose the entire volume. (I've personally had at least one / partition destroyed by fsck after a power failure. I was so, so glad I didn't lose /home, /opt, and /usr/local as well!)
Whether you were the user or the administrator dictated which partitions you most wanted to survive, but either way it adds some robustness. It's also nice to have things separated based on how you will restore them: / and /usr will come from the vendor; /opt will probably be a whole load of different media from all over the place.
In some Unixen (like OpenBSD) it's done so that you can enforce different partition-wide security policies (e.g. "this partition should never have executable files").
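Those policies are just mount flags per partition. A sketch of what such an fstab might look like (partition letters are hypothetical, and noexec here is a policy choice rather than the OpenBSD default):

    /dev/sd0d  /tmp   ffs  rw,nodev,nosuid,noexec  1  2   # nothing on /tmp may execute
    /dev/sd0e  /home  ffs  rw,nodev,nosuid         1  2
    /dev/sd0f  /var   ffs  rw,nodev,nosuid         1  2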
A bit of historical legacy from NFS workstations, where it was useful to have root and swap local* (for speed), but /usr common across many workstations. Those separations kind of became habit.
*Of course it was possible to have root and swap on NFS as well (for truly diskless).
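Concretely, a client's fstab in that era might have looked roughly like this (details varied by Unix; server and path names are hypothetical):

    /dev/sd0a           /     ufs   rw         1  1   # root on the local disk, for speed
    /dev/sd0b           none  swap  sw         0  0   # swap local too
    server:/export/usr  /usr  nfs   ro,nosuid  0  0   # /usr shared read-only over NFS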
Splitting /usr off is mostly done for access control. Plus, if the directories are already split up, you have the option to move them around later. /opt is usually for third-party apps that aren't part of your OS distribution - like apps that ship precompiled binaries and libraries you want to make available system-wide, for example.
The primary user of these laptops was the U.S. military, and they saw combat on the battlefield. Garrett D'Amore, father of the illumos project, used to work on the drivers for this laptop at Tadpole.
Can't we just somehow copy that beautiful keyboard/case to make a modern open implementation of it? We can upgrade all the internals after copying the original format, and do all the prototyping with 3D printing.
Please tell me someone else has a screw loose and wants to do this too.
Now there's a blast from the past. As a young researcher, I got to have a Sparcbook circa 1994 to take to events and demo the multicast multimedia conferencing software we were working on. It cost about five times more than everything I owned put together, and I was always afraid I'd forget it in the pub or something. The alternative was traveling with a Sparc 20 in my hand luggage, which I did too on many occasions. People were always more impressed by seeing video on a laptop though. These days, you think nothing of it.
Oh man - we had one of those tadpoles back in the dot com days! We got our Sparc Oracle/Weblogic stack loaded up on it just like our 'real' servers. Our CEO took it to a very important conference to do a live demo, took to the podium, attempted to adjust the microphone... and the cord tipped a cup of water onto it. Killed the laptop, but he still pulled off what he was pitching. We were all hoping to run off with it after the event.
I remember those... I used to have one; it was so unbelievably useful. I ended up turning it into a fastboot server in an emergency. Then... we spent ages making it run Linux :)
Wow, I forgot about that machine until this post.
Really? The PowerBook in 1997 supported up to 160 MB: https://en.wikipedia.org/wiki/PowerBook_G3#Models
I don't think this was particularly unusual. HP's laptop of the day supported up to 160 MB, as well: http://www.computinghistory.org.uk/det/37569/HP-OmniBook-570...
Might have been something as crazy as installing RAM with a custom address line soldered on, etc...
The 3GX from 1994 also supported 128MB, but the SIMMs had to be low-profile and fast enough... so a bit hard to source.
Netscape 4.76 on Solaris 8. It took a while to find a website that still rendered.
http://www.w6rz.net/netscape.png
> I'm really disappointed he didn't take an image of the disk

I think he did.