densh|5 months ago
And suddenly you can start playing with distributed software, even though it's running on a single machine. For resiliency tests you can unplug one machine at a time with a single click. It will annihilate a Pi cluster in Perf/W as well, and you don't have to assemble a complex web of components to make it work. Just a single CPU, motherboard, m.2 SSD, and two sticks of RAM.
Naturally, using a high core count machine without virtualization will get you the best overall Perf/W in most benchmarks. What's also important, but often not highlighted in benchmarks, is idle power draw, if you'd like to keep your cluster running and only use it occasionally.
globular-toast|5 months ago
I run a K8s "cluster" on a single xcp-ng instance, but you don't even really have to go that far. Docker Machine could easily spin up docker hosts with a single command, but I see that project is dead now. Docker Swarm I think still lets you scale up/down services, no hypervisor required.
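For reference, the Swarm workflow is just a few commands. A minimal sketch, assuming Docker is installed and the current machine can act as a single-node swarm manager; the service name `web` and the `nginx` image are placeholders:

```shell
# turn this machine into a single-node swarm (it becomes the manager)
docker swarm init

# create a replicated service with 2 replicas
docker service create --name web --replicas 2 nginx

# scale up or down with a single command, no hypervisor involved
docker service scale web=5
docker service scale web=1

# clean up
docker service rm web
docker swarm leave --force
```

All replicas land on the same host in a single-node swarm, but the scheduling, scaling, and rolling-update machinery behaves the same as on a multi-node cluster, which is what makes it useful for learning.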
motorest|5 months ago
You're describing people using RPis to learn distributed systems, and you conclude that these RPis are wasted because RPis were made for pedagogy?
> I run a K8s "cluster" on a single xcp-ng instance, but you don't even really have to go that far.
That's perfectly fine. You do what works for you, just like everyone else. How would you handle someone else accusing your computer resources of being wasted?
malux85|5 months ago
It was also how I learned to set up a Hadoop cluster and a Cassandra cluster (this was 10 years ago, when these technologies were hot).
Having knowledge of these systems, and being able to talk about how I set them up and simulated recovery, directly got me jobs that 2x'd and then 3x'd my salary. I would highly recommend that any mid-level developer set up systems like this and get practicing, if you want to get up to the next level.