It's not insane. The best codebase I ever inherited was about 50kloc of C# that ran pretty much everything in the entire company. One web server and one DB server easily handled the ~1000 requests/minute. And the code was way more maintainable than any other nontrivial app I've worked on professionally.
I work(ed) on something similar in Java, and it still works quite well. But the last few years have increasingly been about getting berated by management about why things aren't modern Kubernetes/microservices-based by now.
I feel like people forget just how fast a single machine can be. If your database is SQLite the app will be able to burn down requests faster than you ever thought possible. You can handle much more than 23 req/day.
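To put a rough number on that, here's a toy micro-benchmark using Python's stdlib sqlite3 (the schema and row counts are made up for illustration; absolute numbers depend entirely on your machine):

```python
# Illustrative sketch: point reads against an in-memory SQLite table.
# Even a modest laptop clears far more than 23 requests per day.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1000)])
conn.commit()

n = 10_000
start = time.perf_counter()
for i in range(n):
    conn.execute("SELECT name FROM users WHERE id = ?",
                 (i % 1000 + 1,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} point reads/sec")
```

A file-backed database with realistic queries will be slower than this, but usually by a constant factor, not by the orders of magnitude people seem to assume.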
In the not-too-distant past I was handling many thousands of DB-backed requests per hour in a Python app running plain PostgreSQL.
You can get really, really far with a decent machine if you mind the bottlenecks. Getting into swap? Add RAM. Blocked by IO? Throw in some more NVMe. Any reasonable CPU can process a lot more data than it's popular to think.
It's not. It's kind of bonkers to pursue that when you have a lot of traffic, but it's a perfectly sane starting point until you know where the pain points are.
In general, the vast number of small shops chugging away with a tractably sized monolith aren't really participating in the conversation, just idly wondering what approach they'd take if they suddenly needed to scale up.
I'm not even sure it's bonkers if you have a lot of traffic. It depends on the nature of the traffic and how you define "a lot". In general, though, it's amazing how low latency a function call that can handle passing data back and forth within a memory page or a few cache lines is compared to inter-process communication, let alone network I/O.
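You can measure that gap yourself with nothing but the stdlib. This sketch compares an in-process function call against a loopback TCP round trip as a stand-in for a network hop between services (the "work" function is a trivial placeholder; real handlers shift the absolute numbers but not the ratio's direction):

```python
# Compare in-process call latency vs. a loopback TCP round trip.
import socket
import threading
import time

def work(x):
    return x + 1  # trivial placeholder for "the actual work"

# In-process function call.
n = 100_000
start = time.perf_counter()
for i in range(n):
    work(i)
call_ns = (time.perf_counter() - start) / n * 1e9

# Loopback echo server: the cheapest possible "remote service".
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def echo():
    conn, _ = server.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    while data := conn.recv(64):
        conn.sendall(data)
    conn.close()

threading.Thread(target=echo, daemon=True).start()
client = socket.create_connection(("127.0.0.1", port))
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

m = 1_000
start = time.perf_counter()
for _ in range(m):
    client.sendall(b"x")
    client.recv(64)
rtt_ns = (time.perf_counter() - start) / m * 1e9
client.close()

print(f"function call: ~{call_ns:.0f} ns, loopback round trip: ~{rtt_ns:.0f} ns")
```

And loopback is the *best* case; a real network hop between pods adds serialization, another kernel, and a wire on top of that.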
The corollary to that is, it's amazing how far you can push vertical scaling if you're mindful of how you use memory. I've seen people scale single-process, single-threaded systems multiple orders of magnitude past the point where many people would say scale-out is an absolute necessity, just by being mindful of things like locality of reference and avoiding unnecessary copying.
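One concrete example of "avoiding unnecessary copying", sketched in Python (buffer sizes are arbitrary; the same principle applies in any language): slicing a bytes object copies, while a memoryview slices without copying.

```python
# Copying slices vs. zero-copy views over the same buffer.
import time

data = bytes(100_000_000)  # ~100 MB zero-filled buffer
step = 1_000_000

# Each slice of a bytes object allocates and copies ~1 MB.
start = time.perf_counter()
chunks = [data[i:i + step] for i in range(0, len(data), step)]
copy_s = time.perf_counter() - start

# memoryview slices reference the same buffer: no copies at all.
view = memoryview(data)
start = time.perf_counter()
views = [view[i:i + step] for i in range(0, len(view), step)]
view_s = time.perf_counter() - start

print(f"copies: {copy_s * 1000:.1f} ms, views: {view_s * 1000:.1f} ms")
```

Multiply that kind of difference across every layer of a request path and it's a big part of how a single process keeps scaling long after conventional wisdom says it shouldn't.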
if you have 23 requests per day the insane thing is wondering whether or not you've chosen the correct infrastructure, because it really doesn't matter.
do whatever you want, you've already spent more time than it's worth considering it.
most productive applications have more RPS than that. we should ideally be speaking about how to architect _productive_ applications and not just mocks and prototypes
Don't know if this is sarcasm or not. If you have 23 req/day, then there's no tech problem to solve. Whatever you have is good enough, and increasing traffic will come from solving problems outside tech (marketing, etc)
default-kramer|3 months ago
geodel|3 months ago
Spivak|3 months ago
kstrauser|3 months ago
kstrauser|3 months ago
bunderbunder|3 months ago
notatoad|3 months ago
simianwords|3 months ago
bkanuka|3 months ago