
What the hell have you built

325 points | sachahjkl | 3 months ago | wthhyb.sacha.house

231 comments


brap|3 months ago

I feel like sometimes it’s a form of procrastination.

There are things we don’t want to do (talk to customers, investors, legal, etc.), so instead we do the fun things (fun for engineers).

It’s a convenient arrangement because we can easily convince ourselves and others that we’re actually being productive (we’re not, we’re just spinning wheels).

marfmarkus|3 months ago

It's the natural evolution toward becoming a fun addict.

Unless you actively push yourself to do the uncomfortable work every day, you will always slowly deteriorate and you will run into huge issues in the future that could've been avoided.

And that doesn't just apply to software.

ratsimihah|3 months ago

My first 5 years or so of solo bootstrapping were this. Then you learn that if you want to make money you have to prioritise the right things and not the fun things.

whstl|3 months ago

Is it really for "fun"?

Or is it to satisfy the ideals of some CTO/VPE disconnected from the real world who wants architecture to be done a certain way?

I still remember doing systems design interviews a few years ago when microservices were in vogue, and my routine was probing whether they were OK with a simpler monolith or whether they wanted to go crazy on cloud-native, serverless and microservices shizzle.

It did backfire once at a cloud infrastructure company that had "microservices" plastered over their marketing, even though the people interviewing me actually hated it. They offered me an IC position (I told them to fuck off) because they really hated how I did the exercise with microservices.

Before that, it almost backfired when I initially offered a monolith to an (unbeknownst to me) microservice-heavy company. Luckily I managed to read the room and pivot to microservices during the 1h systems design exercise.

EDIT: Point is, people in positions of power have very clear expectations/preferences of what they want, and it's not fun burning political capital to go against those preferences.

feketegy|3 months ago

It's also virtue signaling of what a great engineer they are. Have you wired together ABC with XYZ? No? Well I did... blah blah blah

pjmlp|3 months ago

An improved CV. Let's be honest, most stuff is boring projects that could even be built with 1990s technology; distributed systems are not something that was invented yesterday.

However, having any of those items from the left side of the deployment strategy in the CV is way cooler than mentioning n-tier architecture, RPC (regardless of how it goes over the wire), any 1990s programming language, and so forth.

A side effect of how badly hiring works in our industry: it isn't enough to have mastered a knife to be a chef, it must be a specific brand of knife, otherwise the chef is not good enough for the kitchen.

sjamaan|3 months ago

This is also how you can identify decent places to work at: look for job postings that emphasize you aren't expected to already know the language.

For example, in the recent "who's hiring" thread, I saw at least two places where they did that: Duckduckgo (they mention only algorithms and data structures and say "in case you're curious, we use Perl") and Stream (they offer a 10-week intro course to Go if you're not already familiar with it). If I remember correctly, Jane Street also doesn't require prior OCaml experience.

The place where I work (bevuta IT GmbH) also allowed me to learn Clojure on the job (but it certainly helped that I was already an expert in another Lisp dialect).

These hiring practices are a far cry from those old style job postings like "must have 10+ years of experience with Ruby on Rails" when the framework was only 5 years old.

damethos|3 months ago

This comment sums up my view as well, but I must confess that I’ve designed architectures more complex than necessary more than once, just to try new things and compare them with what I already knew. I just had to know!

forgetfulness|3 months ago

Any minute you spend in a job interview defending your application server + Postgres solution is a minute you won't have for the follow-up questions about the distributed system the interviewer was expecting.

Yes, it’s nonsense, stirring up a turbulent slurry of eventually consistent components for the sake of supporting hundreds of users per second; it’s also the nonsense that you’re expected to say, so just do it.

wewewedxfgdf|3 months ago

"Maybe Redis for caching".

Really, that's going way too far: you do NOT need Redis for caching. Just put it in Postgres. Why go to this much trouble to put people in their place for over-engineering, then concede "maybe Redis for caching" when this is absolutely something you can do in Postgres? The author clearly cannot stop their own inner desire for overengineering.

fabian2k|3 months ago

I personally wouldn't like to put caching in Postgres, even though it would work at lower scales. But at that scale I don't really need caching anyway. Having the ephemeral data in a different system is more appealing to me as well.

The caching abstractions your frameworks have are also likely designed with something like Redis in mind and work with it out of the box. And often you can just start with an in-memory cache and add Redis later, if you need it.
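A minimal sketch of that start-with-in-memory idea (Python assumed; the class and keys are made up). The point is to keep the same get/set shape a Redis client would have, so swapping Redis in later is a one-class change:

```python
import time

class TTLCache:
    """Tiny in-process cache with a Redis-like get/set shape,
    so it can be swapped for a real Redis client later."""

    def __init__(self):
        self._store = {}  # key -> (expires_at, value)

    def set(self, key, value, ttl=60):
        self._store[key] = (time.monotonic() + ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

cache = TTLCache()
cache.set("user:42", {"name": "Ada"}, ttl=30)
print(cache.get("user:42"))  # {'name': 'Ada'}
```

Obviously per-process only and lost on restart, which is exactly the trade-off being discussed.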

noirscape|3 months ago

A cache can help even for small stuff if there's something time-consuming to do on a small server.

Redis/valkey is definitely overkill though. A slightly modified memcached config (only so it accepts larger items; server responses larger than 1MB aren't always avoidable) is a far simpler solution that provides 99% of what you need in practice. Unlike redis/valkey, it's also explicitly a volatile cache that can't do persistence, which disincentivizes the bad design pattern where the cache becomes state your application assumes any level of consistency of (including its existence). If you aren't serving millions of users, a stateful cache is a pattern best avoided.

DB caches aren't very good mostly because of speed; they have to read from the filesystem (and add network overhead), while a cache reads from memory and can often just live on the same server as the rest of the service.
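For reference, the memcached tweak the comment describes is a couple of config lines (illustrative values; `-m` caps memory, `-I` raises the max item size from its 1MB default):

```
# /etc/memcached.conf (illustrative values)
-m 256    # cap memory at 256 MB: the cache stays fixed-size and volatile
-I 2m     # raise the max item size from the 1 MB default for large responses
```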

jeroenhd|3 months ago

Redis is the filler you shove in there when Postgres itself starts slowing down. Writing database queries that work and writing database queries that work efficiently are very different things.

It'll give you time to redesign and rebuild so Postgres is fast enough again. Then you can take Redis out, but once you've set it up you may as well keep it running just in case.

vlovich123|3 months ago

Postgres has support for an eventually consistent in-memory caching layer?

PretzelJudge|3 months ago

The sentiment here is right, but redis does make a difference at scale. I built a web app this year on AWS Lambda that had up to 1000 requests/second, and at that scale you can have trouble with Postgres, but redis handles it like it’s nothing.

I think that redis is a reasonable exception to the rule of “don’t complicate things” because it’s so simple. Even if you have never used it before, it takes a few minutes to set up and it’s very easy to reason about, unlike mongodb or Kafka or k8s.

dv_dt|3 months ago

Imho, if you can, use a fixed chunk of server memory directly as a cache. That scales out with instances if/when you ever scale out.
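A sketch of that fixed-chunk idea (Python assumed; class name made up): cap the cache by entry count per process, evict least-recently-used, and the memory footprint grows only with the number of instances.

```python
from collections import OrderedDict

class BoundedLRU:
    """Fixed-capacity per-process cache. `capacity` counts entries;
    bound by bytes instead if your items vary a lot in size."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```

For pure function results, `functools.lru_cache` gives you the same thing with zero code.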

tclancy|3 months ago

Because they’re meeting the patients at their own level. Plus while using PG for everything is a currently popular meme on HN (and I am all for it), it’s not something you see all that often. An app server, a database and a cache is a pretty sensible and simple starting point.

Until you get to 100 test users. Then you need Kafka and k8s.

xnorswap|3 months ago

That seemed odd to me too, they're talking about single server, which to me would mean running postgres on the application server itself.

In that scenario, the last thing you need is another layer between application and database.

Even in a distributed environment, you can scale pretty far with direct-to-database as you say.

willvarfar|3 months ago

Sometimes you have to pick your poison when those with other agendas or just inexperience want to complicate things. Conceding that they can use Redis somehow might be enough to get them to stop blaming you for the 'out of date' architecture?

douglee650|3 months ago

"Why is Redis talking to MongoDB?"

lol, in the diagram Redis is not even talking to MongoDB

CommonGuy|3 months ago

Or do not use caching at all until you need it

simpleas987|3 months ago

just use the filesystem, it's superfast and reliable.
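A sketch of what a filesystem cache can look like (Python assumed; function names and the JSON-value restriction are my own): hash the key into a filename, use the file's mtime as the expiry clock, and write via atomic rename so readers never see partial data.

```python
import hashlib
import json
import time
from pathlib import Path

def fs_cache_set(cache_dir, key, value):
    """Write a JSON-serializable value under a hashed filename, atomically."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    path = cache_dir / hashlib.sha256(key.encode()).hexdigest()
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(value))
    tmp.replace(path)  # atomic rename: readers never see a partial write

def fs_cache_get(cache_dir, key, max_age=300):
    """Return the cached value if it is younger than max_age seconds."""
    path = cache_dir / hashlib.sha256(key.encode()).hexdigest()
    try:
        if time.time() - path.stat().st_mtime > max_age:
            return None
        return json.loads(path.read_text())
    except FileNotFoundError:
        return None
```

Single-host only, of course; needing cache hits shared across servers is the point at which memcached/Redis come back into the picture.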

kiesel|3 months ago

I love the fact that the author "wrote" this page with a massive CSS framework (Tailwind) and some sort of JavaScript framework, with a bundler and obfuscator, instead of a plain, simple HTML page. Well played! :-)

SpikeMeister|3 months ago

Fair, the author's point would have been stronger if the page was made using just static HTML/CSS.

But I have to defend Tailwind, it's not a massive CSS framework, it just generates CSS utility classes. Only the utility classes you use end up in the output CSS.

Bengalilol|3 months ago

React + Tailwind + bundler + googlefont + ... Yeah, humans are paradoxical

mb2100|3 months ago

haha, right?! I'm totally onboard with the author's philosophy, hence for websites: https://mastrojs.github.io – the simple web framework and site generator you could have built yourself.

9dev|3 months ago

It's sure a corny stance to hold if you're navigating an infrastructure nightmare daily, but in my opinion, much of the complexity addresses not technical, but organisational issues: You want straightforward, self-contained deployments for one, instead of uploading files onto your single server. If the process crashes or your harddisk dies, you want redundancy so even those twelve customers can still access the application. You want a CI pipeline, so the junior developer can't just break prod because they forgot to run the tests before pushing. You want proper secret management, so the database credentials aren't just accessible to everyone. You want a caching layer, so you're not surprised by a rogue SQL query that takes way too long, or a surge of users that exhaust the database connections because you never bothered to add proper pooling.

Adding guardrails to protect your team from itself mandates some complexity, but just hand-waving that away as unnecessary is a bad answer. At least if you're not working as part of a team.

macspoofing|3 months ago

>It's sure a corny stance to hold if you're navigating an infrastructure nightmare daily, but in my opinion, much of the complexity addresses not technical, but organisational issues: You want straightforward, self-contained deployments for one, instead of uploading files onto your single server ...

You can get all that with a monolith server and a Postgres backend.

isodev|3 months ago

I'm not sure why your architecture needs to be complex to support CI pipelines and proper workflow for change management.

And some of these guidelines have grown into status quo common recipes. Take your starting database for example: the guideline is always "sqlite only for testing, but for production you want Postgres", which is misleading and absolutely unnecessary. These defaults have also become embedded into PaaS services, e.g. the likes of Fly or Scaleway: having a disk attached to a VM instance where you can write data is never a default and is usually complicated or expensive to set up. All while there is nothing wrong with a disk that gets backed up; it can support most modern mid-sized apps out there before you need block storage and whatnot.

Freak_NL|3 months ago

> You want a CI pipeline, so the junior developer can't just break prod because they forgot to run the tests before pushing.

Make them part of your build first. Tagging a release? Have a documented process (checklist) that says 'run this, do that'. Like how in a Java Maven build you would execute `mvn release:prepare` and `mvn release:perform`, which will execute all tests as well as do the git tagging and anything else that needs doing.

Scale up to a CI pipeline once that works. It is step one for doing that anyway.
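A sketch of that documented checklist as a script (Python assumed; the `make` targets are hypothetical). It runs each step in order and stops at the first failure, which is most of what a beginner CI pipeline does anyway:

```python
import subprocess
import sys

# Hypothetical release checklist, one step away from becoming a CI pipeline.
STEPS = [
    ["make", "test"],    # run the full test suite
    ["make", "build"],   # produce the release artifact
    ["make", "deploy"],  # push it to the server
]

def run_steps(steps, runner=subprocess.run):
    """Run each step in order; stop at the first failure."""
    for cmd in steps:
        result = runner(cmd)
        if result.returncode != 0:
            print(f"step failed: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True
```

Moving this into CI later is then mostly a matter of calling the same steps from a workflow file.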

pjc50|3 months ago

I think that's a slightly different set of things to what OP is complaining about though. They're much more reasonable, but also "outside" of the application. Having secret management or CI (pretty much mandatory!) does not dictate the architecture of the application at all.

(except the caching layer. Remember the three hard problems of computer science, of which cache invalidation is one.)

Still hoping for a good "steelman" demonstration of microservices for something that isn't FAANG-sized.

omnicognate|3 months ago

Conway's Law:

> Organizations which design systems... are constrained to produce designs which are copies of the communication structures of these organizations.

lelanthran|3 months ago

> If the process crashes or your harddisk dies, you want redundancy so even those twelve customers can still access the application.

That's fine, 6 of them are test accounts :-)

> It's sure a corny stance to hold if you're navigating an infrastructure nightmare daily, but in my opinion, much of the complexity addresses not technical, but organisational issues

If you have an entire organisation dedicated to 6 users, those users had better be ultra profitable.

> If the process crashes or your harddisk dies, you want redundancy so even those twelve customers can still access the application

Can be done simply by a sole company owner; no need for tools that make sense in an organisation (K8s, etc)

> You want a CI pipeline, so the junior developer can't just break prod because they forgot to run the tests before pushing.

A deployment script that includes test runners is fine for a focused product. You can even do it using a green/blue strategy if you can afford the extra $5-$10/m for an extra VPS.

> You want proper secret management, so the database credentials aren't just accessible to everyone.

Sure, but you don't need to deploy a full-on secrets-manager product for this.

> You want a caching layer, so you're not surprised by a rogue SQL query that takes way too long, or a surge of users that exhaust the database connections because you never bothered to add proper pooling.

Meh. The caching layer is not to protect you against rogue SQL queries taking too long; that's not what a cache is for, after all. As for proper pooling, what's wrong with using the pool that came with your tech stack? Do you really need to spend time setting up a different product for pooling?

> Adding guardrails to protect your team from itself mandates some complexity, but just hand-waving that away as unnecessary is a bad answer.

I agree with that; the key is knowing when those things are needed, and TBH unless you're doing a B2C product, or have an extremely large B2B client, those things are unnecessary.

Whatever happened to "profile, then optimise"?
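On "the pool that came with your tech stack": pooling really is very little machinery, which is why it rarely justifies a separate product. A sketch (Python assumed; class name made up), using a plain queue so callers block briefly instead of exhausting database connections:

```python
import queue
import sqlite3
from contextlib import contextmanager

class TinyPool:
    """Illustration of how little machinery pooling needs; in practice,
    use the pool your driver or framework already ships."""

    def __init__(self, make_conn, size=5):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(make_conn())

    @contextmanager
    def connection(self, timeout=5):
        conn = self._q.get(timeout=timeout)  # block rather than open more
        try:
            yield conn
        finally:
            self._q.put(conn)  # return the connection for reuse

pool = TinyPool(lambda: sqlite3.connect(":memory:", check_same_thread=False), size=2)
with pool.connection() as conn:
    print(conn.execute("SELECT 1").fetchone())  # (1,)
```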

zelphirkalt|3 months ago

Sure, but most of that doesn't make it into the final production thing on the server. CI? Nope. Tests? Nope. The management of the secrets (not the secrets themselves)? Nope. Caching? OK that one does. Rate limits? Maybe, but could be another layer outside the normal services' implementation.

Copenjin|3 months ago

Thinking is scary. No one (among non-thinking colleagues) is going to criticize you for using de-facto standard services like kafka, mongo, redis, etc., regardless of the nonsensical architecture you come up with.

Yes, I also put Redis in that list. You can cache and serve data structures in many other ways, for example by replicating the individual features you need in your application instead of going the lazy route and adding another service to the mix. And don't get me started on Kafka... money thrown down the drain when a stupid grpc/whatever service would do.

Part of being an engineer is also selecting the minimum number of components for your architecture and not being afraid of implementing something on your own if you only need 1 of the 100s of features that an existing product provides.

zigzag312|3 months ago

> No one (among non-thinking colleagues) is going to criticize you for using de-facto standard services

Well put!

Havoc|3 months ago

> Add complexity only when you have proof you need it.

This does assume that said complexity can be added ad hoc later. Often earlier architecture choices make later additions complex too, or even prevent them entirely without a complete rewrite.

So while the overall message is true, there is some liberal use of simplification at play here too.

In some cases a compromise can make sense, e.g. use k8s but keep it simple within that, as vanilla as you can make it.

paulbjensen|3 months ago

Oh my word Riak - I haven't seen that DB mentioned for years!

I totally get the point it makes. I remember many years ago we announced SocketStream at a HackerNews meet-up and it went straight to #1. The traffic was incredible but none of us were DevOps pros so I ended up restarting the Node.js process manually via SSH from a pub in London every time the Node.js process crashed.

If only I'd known about upstart on Ubuntu then I'd have saved some trouble for that night at least.

I think the other thing is worrying about SPOFs and knowing how to respond if services go down for any reason (e.g. the server runs out of disk space because log rotation wasn't set up, or has a hardware failure of some kind, or the data center has an outage; I remember Linode would have a few in their London datacenter that just happened to occur at the worst possible time).

If you're building a side project I can see the appeal of not going overboard and setting up a Kubernetes cluster from the get-go, but when it is things that are more serious and critical (like digital infrastructure for supporting car services like remotely turning on climate controls in a car), then you design the system like your life depends on it.

auxiliarymoose|3 months ago

I think remote climate controls in a car are an ideal use-case for a simpler architecture.

Consider that WhatsApp could do 2M TCP connections on a single server 13 years ago, and Ford sells about 2M cars per year. Basic controls like changing the climate can definitely fit in one TCP packet, and aren't sent frequently, so with some hand-waving, it would be reasonable to expect a single server to handle all remote controls for all of a manufacturer's cars from some year model.

Or maybe you could use wifi-direct and bypass the need for a server.

Or a button on the key fob. Perhaps the app can talk to the key fob over NFC or Bluetooth? Local/non-internet controls will probably be more reliable off-grid... can't have a server outage if there are no servers.

I guess my point is if you take a step back, there are often simple, good solutions possible.
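The back-of-envelope math above checks out even with generous assumptions (the messages-per-car figure below is my guess, not sourced):

```python
# Back-of-envelope load estimate for the remote-controls example.
cars = 2_000_000           # roughly one year of one manufacturer's sales
msgs_per_car_per_day = 10  # generous guess for remote-control taps
seconds_per_day = 86_400

avg_msgs_per_sec = cars * msgs_per_car_per_day / seconds_per_day
print(round(avg_msgs_per_sec))  # 231 messages/second on average
```

Even with a 100x peak-to-average factor, that is well within a single modest server.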

estsauver|3 months ago

I built a small, simple page that I send to people when they start proposing crazy db architectures; people who like this page might like it:

https://nocommasql.com/

hu3|3 months ago

Just a nit: it pollutes back-button history when I expand content. Took 9 presses of the back button to return to HN.

damethos|3 months ago

Useful, but 10 years ago, without JSONB in PG, it wasn't really the answer to everything. As of today, though, I am recommending PG to anyone that does not have a good reason or use case NOT to use it.

isodev|3 months ago

This kind of complexity is unfortunately also embedded into model training data.

Left unchecked, Claude is very happy to propose "robust, scalable and production ready" solutions - you can try it for yourself. Tell it you want to handle new signups and perform some work like send an email or something (outside the lifecycle of the web request).

Merely imply you need some kind of background workload and watch it bring in redis, workflow engines, multiple docker deployment layouts so you can run with and without jobs, an obscene number of environment variables to configure all that, plus "fallbacks" and retries and all kinds of things that you will never spend time on during an MVP and will even later resist adding just because of the complexity and maintenance they require.

All that while (as in the diagram of the post), there is an Erlang/Elixir app capable of doing all that in memory :).
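For the signup-email example, the in-memory version really is this small (a Python sketch of the same idea the comment attributes to Erlang/Elixir; `send_welcome_email` is hypothetical):

```python
import queue
import threading

# Minimal in-process background worker: the MVP alternative to
# Redis + a job framework. Jobs are lost on restart, which is an
# acceptable trade-off until proven otherwise.
jobs = queue.Queue()

def worker():
    while True:
        task = jobs.get()
        if task is None:   # sentinel for a clean shutdown
            break
        task()             # e.g. lambda: send_welcome_email(user)
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# Inside the signup handler: return to the user immediately and
# do the slow work off the request path.
jobs.put(lambda: print("sending welcome email..."))
```

The point at which you genuinely need durability and retries is the point at which Redis et al. earn their complexity.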

dvt|3 months ago

The fact that we have lambdas/serverless functions and people are still over-engineering k8s clusters for their "startup project" is genuinely hilarious. You can literally validate your idea with some janky Python code and like 20 bucks a month.

The problem is that people don't like hearing their ideas suck. I do this too, to be fair. So, yes, we spend endless hours architecting what we'd desperately hope will be the next Facebook because hearing "you are definitely not the next Facebook" sucks. But alas, that's what doing startups is: mostly building 1000 not-Facebooks.

The lesson here is that the faster you fail, the faster you can succeed.

deified|3 months ago

There is an argument I rarely ever see in discussions like this, which is about reducing the need for working memory in humans. I'm just in my mid-thirties, but my ability to keep things in working memory is vastly reduced compared to my twenties. Might just be me who's not cut out for programming or systems architecture, but in my experience, what is hard for me is often what is hard for others; they just either don't think about it, or ignore it and push through, keeping hidden costs alive.

My argument is this: even if the system itself becomes more complex, it might be worth it to make it better partitioned for human reasoning. I tend to quickly get overwhelmed, and my memory is getting worse by the minute. It's a blessing for me with smaller services that I can reason about, predict consequences from, and deeply understand. I can ignore everything else. When I have to deal with the infrastructure, I can focus on that alone. We also have better and more declarative tools for handling infrastructure compared to code. It's a blessing when 18 services don't use the same database, and it's a blessing when 17 services aren't colocated in the same repository with dependencies that most people don't even identify as dependencies. Think law of leaky abstractions.

jagraff|3 months ago

This is a good point - having your code broken up into standalone units that can fit into working memory has real benefits to the coder. I think especially with the rise of coding agents (which, like it or not, are here to stay and are likely going to increase in use over time), sections of code that can fit in a context window cleanly will be much more amenable to manipulation by LLMs and require less human oversight to modify, which may be super useful for companies that want to move faster than the speed of human programming will allow.

ericzundel|3 months ago

I'm going through this decision right now. I agree: if you are building a product with an unproven market and lots of time to grow organically, maybe you do want to start small and scrappy. Build something you can easily throw away and start over with. Build something that gets you to market as quickly as possible so you can pivot.

OTOH, If you are trying to sell the idea to investors and large companies that you are a serious player and have a plan and know-how to grow and scale your service quickly, maybe you do want to show that you have the design chops and ability to actually scale your product. Take a look and ask yourself, "Does my business model only work if it scales up dramatically, far beyond the capacity of a single database?" If the answer is "yes", start with a scalable architecture to save the 100+ person-years and endless gnashing of teeth it will take to untangle your monolith (been there.)

fredsted|3 months ago

CDD, or CV-driven development, as I like to call it.

buzzardbait|3 months ago

The alternative to CI/CD pipelines is to rely on human beings to perform the same repetitive actions the exact same way every single time without any mistakes. You would never convince me to accept that for any non-trivial project.

Especially in an age where you can basically click a menu in GitHub and say "Hey, can I have a CI pipeline please?"

1313ed01|3 months ago

No, the alternative is/was something like "make test" or "build_deploy_and_test.sh".

arealaccount|3 months ago

I think the 2 hours bit was the important part

danslo|3 months ago

s/postgres/sqlite/g

hshdhdhehd|3 months ago

Postgres is simpler. Get your cloud to manage it. Click to create instance, get failover with zero setup. Click button 2 to get guaranteed backups and snapshot point in time.

Komte|3 months ago

Don't agree. Getting managed Postgres from one of the myriad providers is not much harder than using sqlite, but Postgres is more flexible and future proof.

StarGrit|3 months ago

ORMs have better support I've found in the past (at least in .NET and Go) for Postgres. Especially around date types, UUIDs and JSON fields IIRC.

rcarmo|3 months ago

This. So much this. Of course, at one point you start wanting to do queues, and concurrent jobs, and not even WAL mode and a single writer approach can cut it, but if you've reached that point then usually you a) are in that "this is a good problem to have" scalability curve, and b) you can just switch to Postgres.

I've built pretty scalable things using nothing but Python, Celery and Postgres (that usually started as asyncio queues and sqlite).
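That "started as asyncio queues" stage can be sketched in a few lines (Python; worker and job names are made up). The produce/consume shape is the same one you would later move to Celery:

```python
import asyncio

async def main():
    q = asyncio.Queue()
    results = []

    async def worker(name):
        # Each worker pulls jobs until cancelled.
        while True:
            job = await q.get()
            results.append(f"{name} did {job}")
            q.task_done()

    workers = [asyncio.create_task(worker(f"w{i}")) for i in range(3)]
    for job in range(5):
        q.put_nowait(job)
    await q.join()          # wait until every job is processed
    for w in workers:
        w.cancel()
    return results

print(len(asyncio.run(main())))  # 5
```

Swapping `asyncio.Queue` for a Celery task queue later changes the transport, not the shape of the code.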

anonzzzies|3 months ago

Yeah, we run some fairly busy systems on sqlite + litestream. It's not a big deal if they are down for a bit (never happened though), so they don't need failover, and we never had issues (after some sqlite pragma and BUSY code tweaking). Vastly simpler than running + maintaining postgres/mysql. Of course, everything has its place and we run those too, but just saying that not many people/companies really need them. (Also considering that we see systems which DO have postgres/mysql/oracle/mssql set up in HA and still go down for hours to a day per year anyway, so what's it all good for?)

sachahjkl|3 months ago

back in the day, the hype was all around postgres, but I agree

tmarice|3 months ago

Very relatable to a recent interview experience I had with a popular freelance platform, for a backend developer position.

I never worked at a FAANG-ish company, and in the course of my 10-year career I spent most of my efforts on stopping organizations from building the wrong thing in the first place, not on "making things scalable" from the get-go. My view is that if you have product-market fit, you can throw money at the problem for a very, very long time and do just fine, so everyone in the org should focus on achieving PMF as soon as possible.

The question of "How would you scale a Django service to 10M requests per day" came up, and my answer to just scale components vertically and purchase stronger servers obviously was not satisfactory.

contrarian1234|3 months ago

I don't really get this line of argument

Or at least it's not engaging with the obvious counterargument at all: "You may not need the scale now, but you may need it later." For a startup, being a unicorn with a bajillion users is the only outcome that actually counts as success. It's the outcome they sell to their investors.

So sure, you can make an unscalable solution that works for the current moment. Most likely you won't need more. But that's only true b/c most startups don't end up unicorns. Most likely you burn through your VC funding and fold.

Okay, Stack Overflow allegedly runs on a toaster, but most products don't fit that mold, and now that they're tied to their toaster, it probably severely constrains what SO can do in terms of evolving their service.

macspoofing|3 months ago

>So sure, you can make an unscalable solution that works for the current moment.

You're making two assumptions - both wrong:

1) That this is an unscalable solution - A monolith app server backed by Postgres can take you very very far. You can vertically scale by throwing more hardware at it, and you can horizontally scale, by just duplicating your monolith server behind a load-balancer.

2) That you actually know where your bottlenecks will be when you actually hit your target scale. When (if) you go from 1000 users to 10,000,000 users, you WILL be re-designing and re-architecting your solution regardless what you started with because at that point, you're going to have a different team, different use-cases, and therefore a different business.
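Point 1 above is worth spelling out: "duplicating your monolith behind a load-balancer" needs no special architecture, because the balancing itself is just rotation over identical instances. A sketch (Python; hostnames are hypothetical) of what nginx's default round-robin `upstream` does:

```python
import itertools

# Identical monolith instances behind a balancer.
UPSTREAMS = ["app-1:8000", "app-2:8000", "app-3:8000"]
_next = itertools.cycle(UPSTREAMS)

def pick_upstream():
    """Round-robin over the instances, the nginx default, in one line."""
    return next(_next)

print([pick_upstream() for _ in range(4)])
# ['app-1:8000', 'app-2:8000', 'app-3:8000', 'app-1:8000']
```

This works precisely because the monolith instances are stateless copies sharing one Postgres.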

s1mplicissimus|3 months ago

Ironic that clicking those big buttons only causes a JS error to be logged to the console, with nothing else happening. That doesn't particularly lend to the author's credibility, although the advice of using simple architecture where possible is correct.

zelphirkalt|3 months ago

Ah, I didn't even check for an error. I thought it was a joke that the buttons do nothing, because what are they gonna do anyway, I am merely reading an article, lol.

littlestymaar|3 months ago

The problem with doing things the sensible way (eschewing microservices and k8s when you work on projects that aren't hyperscale) is that you end up missing opportunities later on, because recruiters will filter you out when you can't meaningfully respond to the question about “how experienced you are with microservice architecture”. Granted, I may have dodged a bullet by not joining a company with 50 engineers that claims to replicate Google's practices (most of which are there to make sure tens of thousands of engineers can work efficiently together), but still, someone gets to pay the bill at the end of the month…

zargath|3 months ago

you guys are going to miss the days of over-engineered microservice solutions when you are debugging ai workflows :)

lifestyleguru|3 months ago

It's like debugging the code of that guy who wrote most of the project, drank all the coffee in the office, outtalked everyone at the meetings, and then relocated to a new job in Zurich or London.

cube00|3 months ago

Blame the C-suite who approve embedding AWS solution designers into teams.

eddie_catflap|3 months ago

I had a call the other day with a consultancy to potentially pick up some infrastructure work/project type stuff. Asked about timezones involved and they said a lot of their clientele are US based startups. "So it's mainly Kubernetes work" they said.

I personally would suggest the vast majority of those startups do not need Kubernetes and certainly don't need to be paying a consultancy to then pay me to fix their issues for them.

maccard|3 months ago

The problem with kubernetes is that containers just aren't quite enough.

You have an app which runs, now you want to put it in a container somewhere. Great. how do you build that container? Github actions. Great. How does that deploy your app to wherever it's running? Err... docker tag + docker push + ssh + docker pull + docker restart?

You've hit scale. You want redis now. How do you deploy that? Do you want redis, your machine, and your db in three separate datacenters and to pay egress between all the services? Probably not, so you just want a little redis sidecar container... How does the app get the connection string for it?

When you're into home-grown shim scripts, which _are_ brittle and error prone, it's messy. K8s is a sledgehammer, but it's a sledgehammer that works. ECS is AWS-only and has its own fair share of warts. Cloud Run/Azure Container Apps are _great_, but there's nothing like them on DigitalOcean/Hetzner/whatever. So your choices are to use a big cloud with simpler orchestration, use some sort of non-standard orchestration that you have to manage yourself, or just use k8s...
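The "docker tag + push + ssh + pull + restart" loop written out, for concreteness (Python; image, tag, and host names are hypothetical, and nothing here executes any command):

```python
def deploy_commands(image, tag, host):
    """Return the home-grown deploy sequence as a list of shell commands."""
    remote = f"docker pull {image}:{tag} && docker compose up -d"
    return [
        f"docker build -t {image}:{tag} .",
        f"docker push {image}:{tag}",
        f"ssh {host} '{remote}'",
    ]

for cmd in deploy_commands("registry.example.com/app", "v1.2.3", "deploy@prod-1"):
    print(cmd)
```

Three commands look harmless; the brittleness shows up in failure handling, rollbacks, and secrets, which is exactly the gap k8s and the managed container platforms fill.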

mike_kamau|3 months ago

I agree 100%. "Complexity is not a virtue. Start simple. Add complexity only when you have proof you need it."

WesolyKubeczek|3 months ago

Heh

Once you have a service that has users and costs actual money, while you don’t need to make it a spaghetti of 100 software products, you need a bit of redundancy at each layer — backend, frontend, databases, background jobs — so that you don’t end up in a catastrophic failure mode each time some piece of software decides to barf.

blkhawk|3 months ago

Uh, maybe you only need redundancies because you have so many pieces of software that can barf?

I mean it will happen regardless just from the side effects of complexity. With a simpler system you can at least save on maintenance and overhead.

supermatt|3 months ago

Or build your microservices as a monolith using a "local" async service mesh (no libs or boilerplate needed, it's just an async interface for each service) and service-namespaced tables in your DB, then swap in a distributed transport on a per-case basis if you ever need to scale.
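A minimal sketch of that idea, with all names invented for illustration: each "service" is just an async interface, calls go through it in-process by default, and a remote client implementing the same interface could be swapped in later without touching callers.

```python
# Hypothetical sketch: services as plain async interfaces in one process.
import asyncio
from typing import Protocol

class UserService(Protocol):
    async def get_name(self, user_id: int) -> str: ...

class LocalUserService:
    """In-process implementation: a plain async method call, no network."""
    async def get_name(self, user_id: int) -> str:
        return f"user-{user_id}"

class BillingService:
    # Depends only on the interface, so swapping LocalUserService for a
    # remote (HTTP/queue) client later doesn't change this code.
    def __init__(self, users: UserService):
        self.users = users

    async def invoice_header(self, user_id: int) -> str:
        name = await self.users.get_name(user_id)
        return f"Invoice for {name}"

async def main() -> str:
    billing = BillingService(LocalUserService())
    return await billing.invoice_header(42)

result = asyncio.run(main())
print(result)  # Invoice for user-42
```

Because every cross-service call is already `await`ed, moving one service out of process changes latency and failure modes but not the call sites.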

lifestyleguru|3 months ago

Are you doing software for money? Because not having Kubernetes in the project will stop you from receiving money. Someone please use one of these smart AI tools to create the ultimate killer app: Kubernetes+crypto+AI+blockchain+Angular+Redux+Azure (working only in the Chrome browser).

officialchicken|3 months ago

That's already a preset in claude - use the /reddit-recommends-stack command. It doesn't bother to understand and modify your existing code, just completely rewrites it every time for speed and ease of vibe.

hhh|3 months ago

yeah, because Kubernetes isn't actually difficult for most people, and its complexity is overblown unless you are FAANG scale

lunias|3 months ago

The damage of microservices, cloud scale, and a bunch of enterprise architects who have done nothing for 10 years but read blogs (advertisements) written by other enterprise architects who just got back from watching demos at a conference.

Absolutely spot-on site. Love it.

arbol|3 months ago

Recently, with the AWS outage, our stack of loads of different cloud providers ended up working pretty well! It might be a bit complex running distributed nodes and updating state via API, but it's cheap and clearly resilient.

mewpmewp2|3 months ago

The problem is job interviews, where you are expected to know how to scale everything reliably, so it wouldn't be a satisfactory answer to say you'd just run a monolith against a Postgres instance.

alansaber|3 months ago

As practitioners we subconsciously optimise for "beauty", whether in maths, physics, or dev. Most hackers are self-motivated by that beauty, not by 40K Ork-style functional design.

jwr|3 months ago

While I agree with most of this rant, I have a problem with the common "just use postgres" trope, often repeated here.

I recently had to work with SQL again after many years, and was appalled at the incidental complexity and ridiculous limitations. Who in this century would still voluntarily do in-band database commands mixed with user-supplied data? Also, the restrictions on column naming mean that you pretty much have to use some kind of ORM mapping; you can't just store your data. That means an entire layer of code that your application doesn't really need, just to adapt to non-standard conventions from the 70s.

"just use postgres" is not good advice.

r0x0r007|3 months ago

"just use postgres" is an excellent advice. How about incidental complexity and ridiculous limitations of an ORM? Time spent learning how to use an ORM can better be spent 'refreshing' your SQL knowledge. Also, when you learn how an ORM works, you still don't know proper SQL nor how do databases works, so when you switch language now what, you quickly take a course on another ORM? SQL is a language, ORM is not,it's just ' an entire layer of code that your application doesn't really need' and in some applications you could never ever use an ORM.

zhisme|3 months ago

Totally agree.

Now every system design interview expects you to build some monstrous stack with layers of caching and databases for a hypothetical 1M DAU (daily active users) app.

Mess in the head.

flurdy|3 months ago

12 years on, with a lot of Postgres-based services built since the OP's site first went live, I may now actually recommend MongoDB as the sensible option...

remco_sch|3 months ago

I love the unnecessary buttons that do nothing :)

Panzerschrek|3 months ago

Job-security-driven development. It explains why some projects are unnecessarily complex.

nilsherzig|3 months ago

But that’s half the fun (and knowledge about these systems got me my current job)

admissionsguy|3 months ago

Is there any good reason to switch from mysql to postgres though?

fouc|3 months ago

Funny, from the title I was expecting a productivity-adjacent "What have you even built?" article.

Except it's really a "What over-engineered monstrosity have you built?" in the theme of "choose boring technology"

p.s. MariaDB (MySQL fork) is technically older and more boring than PostgreSQL so they're both equally valid choices. Best choice is ultimately whatever you're most familiar with.

1313ed01|3 months ago

MariaDB from 2009, based on MySQL from 1995.

PostgreSQL from 1996, based on Postgres95 from 1995, based on POSTGRES from 1989, based on INGRES from 1974(?).

I wonder if any lines of 1970s, or at least 1980s, code still survive in some corner of the PostgreSQL code base, or if everything has been rewritten at least once by now? It must have started out in K&R C, if it was even C.

qustrolabe|3 months ago

Job offers require experience in technologies that you won't ever need for a solo project. I'm not surprised when those big-scale technologies get shoehorned into a small project for the sake of learning, showcasing "look, I know that one", etc. Only totally missing this point could explain why someone would make this hyperbolic rant page.

redshiftza|3 months ago

Is this targeted at startup bros with an MVP and a dream?

In almost any other scenario, I feel the author is being intentionally obtuse about much of the reality surrounding technology decisions. An engineer operating a Linux box running Postgres & Redis (or working in an environment with this approach) would become increasingly irrelevant and would certainly earn far less than the engineer operating the other stack. An engineering department following "complexity is not a virtue" would either struggle to hire or end up employing engineers whose skills were last up-to-date in 2006.

Management & EXCO would also have different incentives; in my limited observation, middle and upper management are incentivised to increase the importance of their respective verticals, whether in headcount, budget, or tech stack.

Both examples achieve a similar outcome, except one is scalable, fault tolerant, and automated, and the other is at best a VM at Hetzner that would be swiftly replaced should it have any importance to the org. The main argument here (and in the wild) seems to be "but it's haaaard" or "I don't want to keep up with the tech".

KISS has a place, and I certainly appreciate it in the software I use and the operating systems I prefer, but let's take a moment to consider the other folks in the industry who aren't happy to babysit a VM until they retire (or become redundant) before dispensing blanket advice like we're all at a 2018 TED talk. Thanks for coming to my TED talk.

stereolambda|3 months ago

While you're making good points, this shows that engineers and the industry intentionally make work more complex than necessary in order to justify higher prices for labor. This is not uncommon in today's economy, especially in white-collar and regulated work that most people don't understand, but it's worth thinking about regardless.

To be fair, it's hard to imagine the economy and civilization crashing hard enough to force us to be more efficient. But who knows.

preommr|3 months ago

What the hell have you built? Turns out a pretty straightforward service.

That diagram is just aws, programming language, database. For some reason hadoop I guess. And riak/openstack as redundant.

It just seems like pretty standard stuff, with some seemingly small extra parts that make me think someone on the team was familiar with something like Ruby, so they used that instead of Java.

"Why is Redis talking to MongoDB" It isn't.

"Why do you even use MongoDB" Because that's the only database there, and nosql schemaless solutions are faster to get started... because you don't have to specify a schema. It's not something I would ever choose, but there is a reason for it.

"Let's talk about scale" Let's not, because other than hadoop, these are all valid solutions for projects that don't prioritize scale. Things like a distributed system aren't just about technology, but also data design that aren't that difficult to do and are useful for reasons other thant performance.

"Your deployment strategy" Honestly, even 15 microservices and 8 databases (assuming that it's really 2 databases across multiple envs) aren't that bad. If they are small and can be put on one single server, they can be reproduced for dev/testing purposes without all the networking cruft that devops can spend their time dealing with.

lwn|3 months ago

This comment makes this thread a great time capsule. Given that the website is now over 10 years old, it perfectly illustrates how much 'best practices' and architectural complexity (and cloud bills) have changed since then.

whstl|3 months ago

> Honestly, even 15 microservices and 8 databases (assuming that it's really 2 databases across multiple envs) aren't that bad

Sure, they aren't bad. They're horrible.

wiseowise|3 months ago

> Honestly, even 15 microservices and 8 databases (assuming that it's really 2 databases across multiple envs) aren't that bad.

Clown fiesta.

d--b|3 months ago

Yet the author spent a whole afternoon (hopefully not more!) writing a website to tell some people (who exactly?) that they’re doing it wrong.

fouc|3 months ago

It's a web page on a subdomain of his existing personal website, so it probably didn't take him much time at all; about as fast as writing up the text in a Word document and then farting around with the styling and JavaScript a bit.

wiseowise|3 months ago

> Yet the author spent a whole afternoon

As opposed to what? Not doing anything at all and participating in this insanity of complexity?

DrewADesign|3 months ago

Yeah, but the building phase of an overly complex system is rarely the big time suck: maintaining and modifying it are.