You're falling into the false dichotomy that always comes up with these topics: as if the choice is between the cloud and renting rack space while applying your own thermal paste on the CPUs.
In reality, for most people, renting dedicated servers is the goldilocks solution (not colocation with your own hardware).
You get an incredible amount of power for a very reasonable price, but you don't need to drive to a datacenter to swap out a faulty PSU; the on-site engineers take care of that for you.
I ordered an extra server today from Hetzner. It was available 90 seconds later. Using their installer I had Ubuntu 24.04 LTS up and running, and with some Ansible playbooks to finish configuration, the whole process from ordering to fully operational took about 10 minutes tops. If I no longer need the server I just cancel it; billing is per hour these days.

Bang for the buck is unmatched, and there are none of the endless layers of cloud abstraction getting in the way. A fixed, predictable price, unlimited bandwidth, blazing fast performance. Just you and the server, as it's meant to be.
I find it a blissful way to work.
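The finishing-touches playbook mentioned above can be tiny. Here's a minimal sketch; the host group, package list, and tasks are hypothetical, not the commenter's actual configuration:

```yaml
# Hypothetical post-install playbook; names and tasks are illustrative.
- hosts: new_hetzner_servers
  become: true
  tasks:
    - name: Apply pending updates
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Install baseline packages
      ansible.builtin.apt:
        name: [ufw, fail2ban, unattended-upgrades]
        state: present

    - name: Allow SSH through the firewall
      community.general.ufw:
        rule: allow
        name: OpenSSH

    - name: Enable the firewall
      community.general.ufw:
        state: enabled
```

Once the new server's IP is in your inventory, a single `ansible-playbook` run takes it from fresh OS install to configured, which is where the "10 minutes from order to operational" comes from.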
zejn|3 months ago
You were commonly given a network uplink and a list of public IP addresses to set up on your box or boxes. IPMI/BMC was not a given on a server, so if you broke it, you needed remote hands and probably remote brains too.
Virtualisation was in its early days, and most services were co-hosted on the same server.
Software defined networks and Open vSwitch were also not a thing back then. There were switches with support for VLANs and you might have had a private network to link together frontend and backend boxes.
Servers today can be configured remotely. They have their own management interfaces, so you can access the console and install an OS remotely. The network switches can be reconfigured on the fly, making the network topology reconfigurable online. Even storage can be mapped via SAN. The only hands-on issue left is hardware malfunction.
If I were to compare it with today, it was like having a wardrobe of Raspberry Pis on a dumb switch, plugging in cables whenever changes were needed.
lelanthran|3 months ago
I think this is an important point. It's quick.
When cloud got popular, doing what you did could take upwards of 3 months at some organisations, and closer to 8 months at others. The organisational bureaucracy meant that any asset purchase was a long procedure.
So, yeah, the choices were:
1. Wait 6 months to spend out of capex budget
Or
2. Use the opex budget and get something in 10 minutes.
We are no longer in that phase, so cloud services make very little sense now: you can still use the opex budget to get a VPS and have it going in minutes with automation.
alphager|3 months ago
Back when AWS was starting, this would have taken 1-3 days.