top | item 44410571


justcool393 | 8 months ago

i think that people vastly overstate the costs of this sort of thing and it's super bizarre. if you're treating this as a big official corporation™ and want to pay 500 devs like $200k/year or something to make it work, then yeah, you're gonna have problems.

but if you want to build a social network and aren't dreaming of being gazillionaires for it (which is quite reasonable), then you can get by very easily. how do I know this? because... well it's being done successfully. not was done successfully, is done successfully.

you can probably even get people to help out on it.

you can build a social network with a dedi running nginx, hosting your Python application on a Linux box backed by Postgres (plus redis for session storage, though even that is a bit overkill), for like $80/month, deployed with a "deploy.sh" script you run to kick the damn thing into running (Docker is used in dev only, but could easily work here too). should you probably add health checks or whatever? yeah. it still works really well.
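to make that concrete, here's a minimal sketch of what an nginx site config for a setup like this might look like, assuming the Python app sits behind something like gunicorn on a local port (the domain, port, and paths here are all made up for illustration):

```nginx
server {
    listen 80;
    server_name example.org;  # hypothetical domain

    # static assets served straight off disk by nginx
    location /static/ {
        root /srv/app;
        expires 7d;
    }

    # everything else proxied to the Python app (e.g. gunicorn on 8000)
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

nothing clever: nginx serves files itself and hands dynamic requests to one local app process group. that's the whole "stack".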

this scales well past the 100k users mark.

what about video/images/etc? well, this nginx server happily serves out user-uploaded video, storing the files on a bog-standard ext4 filesystem. backups of the site exist too.
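a sketch of what "uploads are just files on ext4" can look like in the Python app, assuming a hypothetical `UPLOAD_DIR` that nginx serves directly; naming files by content hash is my own illustrative choice here (it dedupes repeat uploads and keeps filenames URL-safe), not necessarily what the site does:

```python
import hashlib
import os
import tempfile

UPLOAD_DIR = "/srv/uploads"  # hypothetical path, served as static files by nginx


def save_upload(stream, upload_dir=UPLOAD_DIR):
    """Stream an uploaded file to disk, named by its SHA-256 hash.

    Writes to a temp file first so a half-finished upload never
    appears under its final name; os.replace is atomic on the
    same filesystem.
    """
    os.makedirs(upload_dir, exist_ok=True)
    digest = hashlib.sha256()
    fd, tmp_path = tempfile.mkstemp(dir=upload_dir)
    try:
        with os.fdopen(fd, "wb") as tmp:
            for chunk in iter(lambda: stream.read(64 * 1024), b""):
                digest.update(chunk)
                tmp.write(chunk)
        final_path = os.path.join(upload_dir, digest.hexdigest())
        os.replace(tmp_path, final_path)
        return final_path
    except BaseException:
        os.unlink(tmp_path)
        raise
```

after that, serving the file is nginx's job, not Python's: the app just hands out the path/URL and the kernel + nginx do the byte-shoveling.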

the "stack" i mentioned here isn't fancy or particularly tightly optimized, it's in fact pessimized in a lot of ways. hell I know there were a gazillion ways we could improve performance of our application. show the backend app to a game dev and they'd probably want to start strangling people with how poorly optimized most of the actual app is.

and still, it scales well.

again, I stress that this isn't some theoretical idea, this is actively being executed. the entire venture makes money for the team from users who willingly give money (it's not required to use the service; the actual site is free to use in its full form). this isn't ZFS. this isn't Rust. this isn't some blue-green deployment. this isn't spending hours toiling away at which sysctl to set to squeeze every last cycle out of each box. this isn't behind some massive CDN with "internet scale" boxen, or even (for the video serving part) behind any anti-DDoS service.

it's just a matter of doing actual engineering and being willing to actually build the things you want to build.


mystifyingpoi | 8 months ago

I like stories like these, but I think you just never hit a breaking point with the infra and approaches you've got. You've never exceeded your ext4 volume size, so no need for object storage. You've never had a server die, so one dedi box is fine. You've never had a paying customer call you with an issue, so oncall support is not needed.

So I totally agree with your approach.

justcool393 | 8 months ago

yeah, i mean i guess what i'm trying to say is that the breaking point is very far up there, because computers have gotten breakneck fast relative to the goal being achieved. it's downright difficult to hit the limits unless you're throwing what's effectively a DDoS at it.

i think the big thing though is that it's a community, so people are actually willing to support it even if that means slightly fewer 9s of availability (although in practice, many providers bust right through their "9s" SLAs without a care in the world). and given that a migration from a VM provider to the dedi already happened, migrations obviously can happen if failure presents itself.