macspoofing | 3 months ago

>It's sure a corny stance to hold if you're navigating an infrastructure nightmare daily, but in my opinion, much of the complexity addresses not technical, but organisational issues: You want straightforward, self-contained deployments for one, instead of uploading files onto your single server ...

You can get all that with a monolith server and a Postgres backend.

benterix|3 months ago

With time, I discovered something interesting: for us techies, using container orchestration is about reliability, zero-downtime deployments, limiting blast radius, etc.

But for management, it's completely different. It's all about managing complexity on an organizational level. It's so much easier to think in terms of "Team 1 is in charge of microservice A". And I know from experience that it works decently enough, at least in some orgs with competent management.

kace91|3 months ago

It’s not a management thing. I’m an engineer and I think it’s THE main advantage microservices actually provide: they split your code hard and allow a team to actually take ownership of the domain. No crossing domain boundaries, no in-between shared code, etc.

I know: it’s ridiculous to have an architectural barrier for an organizational reason, and the cost of a bad slice multiplies. I still think that, in some situations, it’s preferable to the gas-station-bathroom effect of shared codebases.

embedding-shape|3 months ago

> using container orchestration is about reliability, zero-downtime deployments

I think that's the first time I've heard any "techie" say we use containers because of reliability or zero-downtime deployments. Those feel like they have nothing to do with each other; we were building reliable server-side software with zero-downtime deployments long before containers became the "go-to", and if anything it was easier before containers.

Towaway69|3 months ago

As soon as there is more than one container to organise, it becomes a management task for said techies.

Then suddenly one realises that techies can also be bad at management.

Managing a container environment requires not only deployment skills but also documentation and communication skills. Suddenly it's not management but rather the techie who can't manage their tech stack.

This finger-pointing at management is rather repetitive and simplistic, but also very common.

9dev|3 months ago

You don't. When your server crashes, your availability is zero. It might crash for a myriad of reasons; at some point you might need to update the kernel to patch a security issue, for example, and be forced to take your app down yourself.

If your business can afford irregular downtime, by all means, go for it. Otherwise, you'll need to take precautions, and those will invariably make the system more complex than a single server.

macspoofing|3 months ago

>You don't. When your server crashes, your availability is zero.

As your business needs grow, you can start layering complexity on top. The point is you don't start at 11 with an overly complex architecture.

In your example, if your server crashes, just make sure you have some sort of automatic restart. In practice that may mean a downtime of seconds for your 12 users. Is that more complexity? Sure - but not much. If you need to take your service down for maintenance, you notify your 12 users and schedule it for 2am ... etc.
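Roughly, the idea of "automatic restart" is this (a sketch in Python; "./app-server" is a placeholder, and in reality you'd just use systemd's Restart=always or whatever your process manager offers):

    # naive supervisor sketch: rerun the app whenever it exits
    # "./app-server" is a placeholder for your monolith binary
    import subprocess
    import time

    while True:
        subprocess.run(["./app-server"])  # blocks until the process dies
        time.sleep(2)                     # a couple of seconds of downtime, then back up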

Later you could create a secondary cluster and stick a load balancer in front. You could also add a secondary replicated PostgreSQL instance. So the monolith/Postgres architecture can actually take you far as your business grows.
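The client side of that replica doesn't have to be fancy either. A rough sketch, assuming psycopg2, a reasonably recent libpq/PostgreSQL (10+), and a streaming-replication standby (the hostnames are placeholders): you list both servers and libpq only talks to the one that accepts writes.

    # sketch: multi-host failover via libpq; hostnames are placeholders
    import psycopg2

    conn = psycopg2.connect(
        "host=db-primary,db-standby port=5432 dbname=app user=app "
        "target_session_attrs=read-write"  # skip hosts that are read-only standbys
    )

Promoting the standby when the primary dies is still on you (or on your managed Postgres provider).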

wouldbecouldbe|3 months ago

Yeah, theoretically that sounds good. But I've had more downtime from cloud outages and Kubernetes updates than I ever had using a simple Linux server with nginx on hardware; most outages I had on Linux with my VPS were due to DigitalOcean's own hardware failures. AWS was down not so long ago.

And if certain servers do become very important, you just run a backup server on a VPS and switch over DNS (even if you keep a high TTL, most resolvers update within minutes nowadays), or, if you want to be fancy, throw a load balancer in front of it.
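The DNS switch can even be automated in a couple dozen lines. A rough sketch, assuming Cloudflare-managed DNS (the zone/record IDs, token, hostnames and IPs below are all placeholders):

    # sketch: poll the primary, repoint the A record at the backup if it stops answering
    import json
    import time
    import urllib.request

    PRIMARY_URL = "https://example.com/healthz"
    BACKUP_IP = "203.0.113.7"
    CF_RECORD = ("https://api.cloudflare.com/client/v4/zones/"
                 "ZONE_ID/dns_records/RECORD_ID")

    def primary_is_up():
        try:
            with urllib.request.urlopen(PRIMARY_URL, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    def point_dns_at_backup():
        body = json.dumps({"type": "A", "name": "example.com",
                           "content": BACKUP_IP, "ttl": 300}).encode()
        req = urllib.request.Request(
            CF_RECORD, data=body, method="PUT",
            headers={"Authorization": "Bearer API_TOKEN",
                     "Content-Type": "application/json"})
        urllib.request.urlopen(req)

    while True:
        if not primary_is_up():
            point_dns_at_backup()
            break
        time.sleep(30)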

If you solve issues in a few minutes people are always thankful, and most don't even notice. With complicated setups it tends to take much longer just to figure out what the issue is in the first place.

sfn42|3 months ago

I don't see how you solve this with microservices. You'll have to take down your services in these situations too; a monolith and a microservices soup have the exact same problem.

Also, in 5 years of working on both microservice-y systems and monoliths, not once have the things you describe been a problem for me. Everything I've hosted in Azure has been available pretty much all the time, unless a developer messed up or Azure itself had downtime that would have taken down either kind of app anyway.

But sure let's make our app 100 times more complicated because maybe some time in the next 10 years the complexity might save us an hour of downtime. I'd say it's more likely the added complexity will cause more downtime than it saves.

danmaz74|3 months ago

You can have redundancy with a monolithic architecture. Just have two different web servers behind a proxy, and use Postgres with a hot standby (or a managed Postgres instance, which already has that).
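And the proxy layer really isn't much. In practice you'd just use nginx or HAProxy, but as a toy illustration of round-robin across two backends (stdlib Python only, ports made up):

    # toy round-robin reverse proxy: GET only, no error handling, not for production
    import http.server
    import itertools
    import urllib.request

    BACKENDS = itertools.cycle(["http://127.0.0.1:8001", "http://127.0.0.1:8002"])

    class Proxy(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            backend = next(BACKENDS)  # alternate between the two app servers
            with urllib.request.urlopen(backend + self.path) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

    if __name__ == "__main__":
        http.server.HTTPServer(("", 8080), Proxy).serve_forever()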

pjmlp|3 months ago

Well, load balancers are an option.

tnel77|3 months ago

In this job market, how am I supposed to get hired without the latest buzzwords on my resume? I can’t just have a monolithic server and Postgres!

(Sarcasm)

spoiler|3 months ago

You're being sarcastic, but heavens above, have I had some cringe interviews in my last round, and most of the absurdity came from smaller start-ups too.

chistev|3 months ago

Indicating sarcasm ruins the sarcasm

YetAnotherNick|3 months ago

Most of the time it isn't the complexity that bites, it's the brittleness. It's much easier to work with a bad but well-documented solution (e.g. GitHub Actions), where all the issues have already been hit by other users and the workarounds are documented by the community, than to roll your own (e.g. a simple script-based CI/CD).