I know it's not in fashion, but I will suggest that renting physical servers is a very good and under-appreciated compromise. As an example, 45€/month gets you a 6-core AMD with 64GB of RAM and NVMe SSDs at Hetzner. That's a lot of computing power!
Virtualized offerings perform significantly worse (see my 2019 experiments: https://jan.rychter.com/enblog/cloud-server-cpu-performance-...) and cost more. The difference is that you can "scale on demand", which I found not to be necessary, at least in my case. And if I do need to scale, I can still do that, it's just that getting new servers takes hours instead of seconds. Well, I don't need to scale in seconds.
In my case, my entire monthly bill for the full production environment and a duplicate staging/standby environment is constant, simple, predictable, very low compared to what I'd need to pay AWS, and I still have a lot of performance headroom to grow.
One thing worth noting is that I treat physical servers just like virtual ones: everything is managed through ansible and I can recreate everything from scratch. In fact, I do use another "devcloud" environment at Digital Ocean, and that one is spun up using terraform, before being passed on to ansible that does the rest of the setup.
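A minimal sketch of that terraform-then-ansible handoff (the inventory path and playbook name are made-up examples):

```shell
# provision the cloud resources first
terraform init
terraform apply -auto-approve

# then let ansible do the actual machine setup
# (inventory file and playbook names are examples)
ansible-playbook -i inventory/devcloud.ini site.yml
```

The nice part is that the same ansible roles work unchanged whether the inventory points at cloud VMs or rented physical boxes.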
I really don't understand why the industry seems to have lost sight of this. It's really common to see super complicated, incredibly expensive, but highly scalable cloud deployments for problems that can be trivially solved with one or two dedicated servers. Even the mere suggestion of renting a dedicated server provokes scorn from devops teams. The overall difference in cost when taking into account all of the complexity, feature-lag and general ceremony must be at least 10x and maybe even closer to 100x. It's madness.
I can use a $4 VPS for my own personal cloud. I will never pay $45 for that.
There's a whole band of people who have the technical chops to self-host, or to host little instances for their family/friends/association/hiking club. They sit in that small margin where you're OK spending a little extra because you want to do it properly, but can't justify paying much more or spending time on heavy maintenance. A small VPS with a shared Nextcloud or a small website is all that's needed in many cases.
AWS is very cost-efficient for other services (S3, SES, SQS, etc.), but virtual machines are not a good deal: you get less RAM and CPU, with virtualization overhead on top, and pay a lot more money.
Especially for Postgres: if you run some tests with pgbench, you can really see the penalty you pay for virtualization.
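For anyone who wants to reproduce that kind of comparison, a minimal pgbench run looks roughly like this (the database name is an example; run the same commands on bare metal and on a VM, then compare the reported tps):

```shell
# create and populate a benchmark database (scale factor 50, roughly 750 MB)
createdb benchtest
pgbench -i -s 50 benchtest

# 8 client connections, 2 worker threads, 60-second run
pgbench -c 8 -j 2 -T 60 benchtest
```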
Maybe the sysadmin skill of being able to build your own infrastructure is becoming a lost art, otherwise I can't explain why people are so in love with paying 5x for less performance.
Hetzner is cheap and reliable in Europe; if you're in North America, take a look at OVH, especially their cost-saving brand SoYouStart. You can get 4 cores/8 threads at 4.5 GHz, 64 GB of RAM, and an NVMe drive for $65.
(I have no affiliation with OVH, I'm just a customer with almost 100 servers, and it's worked out great for me)
I was upset last week when I saw how much our managed Postgres service cost us at work: $800 for the month, for around 32 GB of data and 4 allocated CPU cores.
Like you, I also run my services from a rented physical server. I used to use Versaweb, but their machines are too old. I didn't previously like Hetzner because I'd heard bad things about them interfering with what you're running.
However, I moved to them in December when my Versaweb instance just died, probably an SSD failure from old age. I'm now paying 50% of what I paid Versaweb, and I can run 6 such Postgres instances.
It makes one wonder whether it's worth paying $700 or $800 for a managed service with a fancy cloud UI, automatic upgrades and backups, etc.
For a 1 person show or small startup, I think not. Cheaper to use an available service and dump backups to S3 or something cheaper.
I went the opposite direction at Hetzner with the last round of price hikes. I now use multiple Hetzner Cloud instances for my personal projects, at 1/4 of the price most of the time, or more when I'm messing with something in particular.
Peak performance is certainly worse, but I'm not too bothered if something takes longer to run. You're certainly right about automating as much of the server provisioning as possible, something I didn't do with my physical server.
I would actually say just invest in the hardware and count the asset depreciation on taxes. Further, “scaling” horizontally is rather easy if you properly separate functions into different servers. For example, a few really light machines running nginx (with fastcgi cache enabled, because yes) behind an haproxy machine, your PHP/Python/JS/Ruby machines behind your nginx machines, and your DB cluster with TiDB or something behind that. You’ve removed the overhead of the container systems and the overhead of the virtualization platform. You’re no longer sharing CPU time with anyone. You’re not experiencing as many context switches or interrupts. The cost is all upfront though. You will still pay for bandwidth and power, but over time your cost should be lower.
The main issue in any scenario involving real hardware is that you need staff who are competent in both hardware and Linux/UNIX systems. Many claim to be on their resumes and then cannot perform once on the job (in my experience, anyway). In my opinion, one of the major reasons for the explosion of the cloud world was precisely the difficulty and financial cost of building such teams. Additionally, there is a somewhat natural (and necessary) friction between application developers and systems folks. The systems folks should always be pushing back and arguing for more security, more process, and fewer deployments. The dev team should always be arguing for more flexibility, more releases, and less process. Good management should then strike the middle path between the two. Unfortunately, incompetent managers have often just decided to get rid of systems people and move things into AWS land.
Finally, I would just note that cloud architecture is bad for the planet: it requires over-provisioning by cloud providers, and it requires more computing power overall due to the many layers of abstraction. While any one project is responsible for little of this waste, the global cloud as an aggregate is very wasteful. This bothers me and likely factors as an emotional bias in my views (so large amounts of salt for all of the above).
I do exactly this, using Hetzner as well. I was managing some side projects and self-hosting, and the bill just kept creeping up because the VPSes were never powerful enough. When I started feeling the need to add even more VPSes, I started shopping around, and in the end got a similar deal and specs. I can do anything I want with it now, and even with quite a few self-hosted services and projects I'm still running at only about 10-15% capacity.
If I want to spin up a new project or try out hosting something new it takes a couple minutes and I've got the scripts. Deployments are fast, maintenance is low, and I have far more for my money.
For anyone who's interested this is the rough cut of what I'm using:
* Ansible to manage everything
* A tiny bit of terraform for some DNS entries which I may replace one day
* restic for backups, again controlled by ansible
* tailscale for vpn (I have some pi's running at home, nothing major but tailscale makes it easy and secure)
* docker-compose for pretty much everything else
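As a rough sketch of how the restic piece fits together (the repository bucket and backup paths below are made-up examples):

```shell
# point restic at an S3-compatible bucket
export RESTIC_REPOSITORY="s3:s3.amazonaws.com/example-backups"
export RESTIC_PASSWORD_FILE="/root/.restic-password"

restic init                                           # one-time repository setup
restic backup /srv/app-data /etc                      # snapshot the paths that matter
restic forget --keep-daily 7 --keep-weekly 4 --prune  # expire old snapshots
```

Wrapping those last two commands in a cron job or systemd timer via ansible keeps the whole thing reproducible.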
In 2015 I worked on a project where really big servers (lots of RAM, fast SSDs) were needed for a large database. The client would have preferred AWS, but the monthly bill would have been something like 30K euros. So they went with Hetzner for a few hundred bucks a month...
Don't discount your local colo, either. I pay $75/month for 2U, a gigabit Ethernet link to a router two hops from the metro SONET ring in Albany, NY, and 1 Mbps 95th-percentile bandwidth. I've got a router, a switch, and a 1U 32-core AMD Bulldozer box in there hosting VMs (it's past due for replacement but running fine).
Yes, you're supporting your own hardware at that point. No, it's not a huge headache.
And with that computing power it's easy to install qemu-kvm and virtualise your own servers, which is more scalable (and easier to migrate when the hardware you're renting becomes redundant) than having one or two monolithic servers with every conceivable piece of software installed, conflicting dependencies, and so on.
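For example, one way to carve such a box into VMs with libvirt (the name, sizes, OS variant, and ISO path are all examples):

```shell
# create a 4-vCPU, 8 GB guest with a 40 GB disk from an installer ISO
virt-install \
  --name app1 \
  --memory 8192 --vcpus 4 \
  --disk size=40 \
  --os-variant debian11 \
  --cdrom /var/lib/libvirt/images/debian-11.iso
```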
The biggest additional cost to this is renting more IPv4 addresses, which Hetzner charge handsomely for now that there are so few available.
Whatever you create will start with 0 users, and an entire real machine is complete overkill for the 0 load you will get. You upgrade your VPS into a pair of real machines, then into a small rented cluster, and then into a datacenter (if somebody doesn't undercut that one first). All of those have predictable bills and top performance for their price.
I actually agree with you; it's just a little bit more expensive. An under-appreciated thing about dedicated servers is that they often come with very solid network bandwidth, which really helps for use cases like streaming audio/video.
The cloud is the golden cage. Companies, and people, got sold on comfort and ease of use while trapping themselves in the vendor lock-in environment that is the Hotel California. By the time they realize the problem, they are too deep into the tech, and rewriting their codebase would be too complex or expensive, so they bite the expense and keep on doing what they are doing, constantly increasing their dependency and costs, never able to leave.
As you pointed out, bare metal is the way to go. It works the opposite of cloud: a bit more work at the beginning, but far lower expenses in the end.
In general I agree that physical servers are great, but I think it's important to note that for most people a $4/month VPS is more than enough. So actually 45€/month would be overkill in that case.
For 5 EUR / month you can also get a dedicated server (not a VPS) from OVH.
Sure, it's only an Atom N2800 with 4 GB of RAM, a 1 TB SSD, and 100 Mbit/s bandwidth (which is definitely the bottleneck, as I've got gigabit fiber at home).
But it's 5 EUR / month for a dedicated server (and it's got free OVH DDoS protection too as they offer it on every single one of their servers).
I set up SSH login on these using a FIDO/U2F security key only (no password, no software public/private keys: I only allow physical security key logins). I only allow SSH in from the CIDR blocks of the ISPs I know I'll ever reasonably be logging in from, and just DROP all other incoming traffic to the SSH port. This keeps the logs pristine.
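A sketch of that setup (the CIDR block below is a documentation-range placeholder, not a real ISP range):

```shell
# generate a hardware-backed key; needs OpenSSH 8.2+ and a FIDO2 security key
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

# in /etc/ssh/sshd_config, disable everything else:
#   PasswordAuthentication no
#   KbdInteractiveAuthentication no

# accept SSH only from known ISP ranges, silently drop the rest
iptables -A INPUT -p tcp --dport 22 -s 198.51.100.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```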
Nice little pet these are.
I'm not recommending these 5 EUR / month servers for production systems but they're quite capable compared to their price.
I’ve recently started deploying on Cloudflare workers.
They’re cheap and “infinitely scalable.” I originally picked them for my CRUD API because I didn’t want to have to worry about scaling. I’ve built/managed an internal serverless platform at FAANG and, after seeing inside the sausage factory, I just wanted to focus on product this time around.
But I’ve noticed something interesting/awesome about my change in searches while working on product. I no longer search for things like “securely configuring ssh,” “setting up a bastion,” “securing a Postgres deployment,” or “2022 NGinx SSL configuration” - an entire class of sysadmin and security problems just go away when picking workers with D1. I sleep better knowing my security and operations footprint is reduced and straightforward to reason about. I can use all those extra cycles to focus on building.
I can’t see the ROI of managing a full Linux stack on an R620 plugged into a server rack vs. Workers when you factor in the cost of engineering time to maintain the former.
I do think this is a new world though. AWS doesn’t compare. I’d pick my R620s plugged into a server rack over giving AWS my credit card any day. AWS architectures are easy to bloat and get expensive fast - both in engineering cost and bills.
I have worked with several small clients to migrate away from AWS/Azure instances onto dedicated hardware from Hetzner or IBM "Bare Metal" hardware.
The question I ask first is: as a company, what is an acceptable downtime per year?
I give some napkin calculated figures for 95%, 99%, 99.9% and 99.99% to show how both cost and complexity can skyrocket when chasing 9s.
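Those napkin figures are easy to reproduce; each extra nine cuts the allowed downtime by roughly a factor of ten:

```shell
# hours of allowed downtime per year for each availability target
for a in 95 99 99.9 99.99; do
  awk -v a="$a" 'BEGIN { printf "%g%%: %.1f hours/year\n", a, (100 - a) / 100 * 365 * 24 }'
done
```

That's 438 hours a year at 95% but under an hour at 99.99%, which is where the cost and complexity start to skyrocket.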
They soon realise that a pair of live/standby servers might be more than suitable for their business needs at that particular time (and for the foreseeable future).
There is an untapped market of clients moving _away_ from the cloud.
I've been running my company website for years on $5 Linode. I used to host everything on there (downloads, update checking, crash reporting, licensing, a Postgres database for everything).
I've never had any performance issues. A $5 VPS is plenty for Apache, PHP, PostgreSQL, for a few thousand users a day.
I've started using multiple VPS, one for each service. Not for performance reasons, but for two things:
- isolation: if there's a problem with one service (eg. logs used up all disk space) it doesn't bring everything down at once
- maintainability: it's easier to upgrade services one by one than all at once
Does anyone here remember developing applications on machines with a 25 MHz CPU and 8 MB of memory? That VPS probably has a 1 GHz CPU and 1 GB of memory.
How you develop an application depends completely on what you have available to you and what its use case is. If you don't have money, design it to be resource-efficient. If you do have money, design it to be a resource pig. If it needs to be high performance, design it to be very efficient. If it doesn't need to be high performance, just slap something together.
As a developer, you should know how to design highly efficient apps, and highly performant apps, and how to develop quick and dirty, and how to design for scalability, depending on the situation. It's like being a construction worker: you're going to work on very different kinds of buildings in your career, so learn different techniques when you can.
I highly recommend, for fun, trying to develop some apps inside a VM with very limited resources. It's pretty neat to discover what the bottlenecks are and how to get around them. You may even learn more about networking, file I/O, virtual memory allocation, CoW, threading, etc. (I wouldn't use a container to start, as there are hidden performance issues that may be a distraction.)
The article does not really answer the question in any meaningful way; it just tests a CRUD blogging server written in Go against a MongoDB database (both dockerized…).
If you expect comprehensive benchmarks or testing, save your time.
I think the biggest thing that snipes a lot of technology teams is some notion that production can never ever go down no matter what. Every byte must be synchronously replicated to 2+ cross-cloud regions, etc. Not a single customer can ever become impacted by a hacker, DDOS, or other attack.
Anyone in this industry is prone to these absolutist ideologies. I wasted a half-decade chasing perfection myself. In reality, there are very few real world systems that cannot go down. One example of a "cannot fail" I'd provide is debit & credit processing networks. The DoD operates most of the other examples.
The most skilled developer will look at a 100% uptime guarantee, laugh for a few moments, and then spin up an email to the customer in hopes of better understanding the nature of their business. We've been able to negotiate a substantially smaller operational footprint with all of our customers by being realistic with the nature and impact of failure.
If you can negotiate to operate your product on a single VM (ideally with the database being hosted on the same box), then you should absolutely do this and take the win. Even if you think you'll have to rewrite due to scale in the future, this will get you to the future.
Periodic, crash-consistent snapshots of block storage devices are a completely valid backup option. Many times it is perfectly OK to lose data. In most cases, you will need to reach a small compromise with the business owner, where you develop an actual product feature to compensate for failure modes. An example of this for us would be emailing important items to a special mailbox so they can be recovered from a back-office perspective. The amount of time it took to develop this small product feature is not even 0.01% of the time it would have taken to build a multi-cloud, explosion-proof product.
Putting my recommendation in for Vultr - have used them for many years and have had very good results off of a cheap VPS. Also trivial to migrate and upgrade hosts on the fly.
Someone had a site set up to measure VPS providers by running a suite of tests every hour and collecting the results by hosting provider. Was surprising to see transient performance degradations, downtimes and stark differences in performance for "2 vcpu 1gb ram" depending on the hardware underneath and level of overprovisioning.
I used Johnscompanies, Linode, DigitalOcean, then Vultr. Vultr shocked me by how poor their customer service was, after 3 days of downtime, multiple claims of fixing the issue soon, and not bothering to notify me when it finally was fixed. I didn't experience that at any of my previous hosts.
I dislike Vultr based on their deceptive marketing around the very cheap $2.50 and $3.50 instances they (used to?) list on their website. Usually those are only available in one or two locations, with no way of checking before creating an account and loading up the minimum amount ($10 when I tried).
Does anyone have tips for managing/monitoring a few cheap VPS? I have one that I pay $4 a year for and only use as a Syncthing middleman between my laptop and phone. I also have a few other small ones that I use for single purposes. However, I don't have a good way to see how much storage is used on each VPS, or the CPU utilization, without SSHing into them to check.
I would use telegraf (https://github.com/influxdata/telegraf) to gather the metrics you want from your servers. It has built-in functions to get metrics like disk usage, cpu, etc...
From there I would export those metrics to a grafana+influxdb setup. But honestly this is because that's what I'm used to professionally. There might be simpler solutions around.
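One documented way to bootstrap a minimal telegraf setup is its own config generator, filtered down to just the plugins you need:

```shell
# emit a config containing only cpu/mem/disk inputs and an influxdb output
telegraf --input-filter cpu:mem:disk --output-filter influxdb config > telegraf.conf
```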
If you're happy not having a home-grown open source solution, New Relic is essentially free if you don't have many servers and turn off extended metrics. If you start adding in more integrations, it's gonna cost you, but for basic monitoring and nice graphs hosted externally from your systems itself, it works nicely.
https://opsverse.io/observenow-observability/ ...As close to free as possible since the cost is primarily driven by amount of ingestion. Works great for small setups with scope to grow over time!
The part I hate most is the security aspects. Keeping things up to date, port blocking, iptables, firewalls, etc. Anyone know if there’s a SaaS that just ssh’s in and does that stuff? I use serverpilot.io but it’s aimed specifically at DO + PHP, and would like more flexibility on which core services could be installed.
Hosting stuff on small machines was why I came up with https://github.com/piku, and I still use it for those - I spent a long time trying to cram LXC, Docker and the like into single-core machines, and wanted a way to make it as painless as possible.
These days I’m running my static site builder, a few scrapers/RSS converters and a number of Mastodon-related services on it, on various kinds of cloud and physical hardware…
Vertical scaling is seriously underrated nowadays. Also, everyone chases those 99.99999% availability figures, but very few actually need them, so scaling vertically is not a problem for 99.9% of startups.
These test results are surprising. In my experience, when the server cpu is pegged, it causes latency to shoot through the roof unless the number of parallel requests is finely tuned.
In this case, there are at most 50 workers hitting the server, so you'd expect up to 50 parallel requests outstanding at once. 1300 req/sec across 50 workers works out to roughly 38 msec/req, which is in line with the results.
So I wonder why the server being pegged didn’t affect things more? Super curious what the server side metrics were during the test.
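A quick back-of-envelope check via Little's law (in-flight requests = throughput × latency):

```shell
# Little's law: L = X * W, so W = L / X
# prints "38.5 ms/request"
awk 'BEGIN { workers = 50; tps = 1300; printf "%.1f ms/request\n", workers / tps * 1000 }'
```

That assumes each worker issues its requests serially with no think time between them.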
And you can do a LOT using such VMs, now that most are hosted on SSDs instead of spinning disks.
My take-away points are the following:
1) Beware of cheap OpenVZ offers (e.g. on LEB or WHT), performance is usually worse than offers with proper virtualization like KVM, and the need to patch OpenVZ into the kernel causes most offerings to use a more or less outdated Linux Kernel leading to a very questionable level of security.
2) If your VM hosts "serious" data, you had better do your research and use a reputable hosting provider. This may cost a bit more but will save you a lot of headache in the future.
3) Unless it's just a toy project, you should look into enabling replication of your data across two or three different VPS providers. While this at most triples your performance, the reliability will increase at least tenfold.
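The reliability claim checks out on a napkin if you assume independent failures. With each provider at, say, 99% availability, combined unavailability shrinks geometrically:

```shell
# availability of n independent replicas, each individually 99% available
awk 'BEGIN { a = 0.99; for (n = 1; n <= 3; n++) printf "n=%d: %.4f%% available\n", n, (1 - (1 - a)^n) * 100 }'
```

Going from one replica to two takes you from 1% downtime to 0.01%, a hundredfold improvement on paper, though correlated failures (DNS, a bad deploy of your own) make reality less generous.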
Personally, for me the $5 ones no longer work. The nightly "dnf update" gobbles up >1 GB of memory for God knows what reason and randomly kills off httpd or the application server.
But a ~$10 virtual box with 2GB of RAM works fine. Nothing to complain. I get 2TB transfer and 50 GB space.
I was running one of my products on seven $5 instances from DigitalOcean, which included a load balancer (Apache), a database with replication, static site hosting, a web app, and two backends. It was happily serving hundreds of concurrent users with quite a lot of room left to grow. With their S3 clone and a few volumes we rarely paid more than $45 per month. You really can get far with cheap cloud machines.
I've seen lots of less experienced people overpay for Hetzner and similar when a $5-10 VPS would've worked.
Setting up and managing Postgres is a pain, though. It would be nice to have a simpler way of getting it all right.
[+] [-] TacticalCoder|3 years ago|reply
Sure it's only an ATOM N2800 with 4 GB of RAM / 1 TB SSD / 100 Mbit/s bandwith (which is definitely the bottleneck as I've got gigabit fiber to the home).
But it's 5 EUR / month for a dedicated server (and it's got free OVH DDoS protection too as they offer it on every single one of their servers).
I set up SSH login on these using FIDO/U2F security key only (no password, no software public/private keys: I only allow physical security key logins). I only allow SSH in from the CIDR blocks of the ISPs I know I'll only ever reasonably be login from and just DROP all other incoming traffic to the SSH port. This keeps the logs pristine.
Nice little pet these are.
I'm not recommending these 5 EUR / month servers for production systems but they're quite capable compared to their price.
[+] [-] r3trohack3r|3 years ago|reply
They’re cheap and “infinitely scalable.” I originally picked them for my CRUD API because I didn’t want to have to worry about scaling. I’ve built/managed an internal serverless platform at FAANG and, after seeing inside the sausage factory, I just wanted to focus on product this time around.
But I’ve noticed something interesting/awesome about my change in searches while working on product. I no longer search for things like “securely configuring ssh,” “setting up a bastion,” “securing a Postgres deployment,” or “2022 NGinx SSL configuration” - an entire class of sysadmin and security problems just go away when picking workers with D1. I sleep better knowing my security and operations footprint is reduced and straightforward to reason about. I can use all those extra cycles to focus on building.
I can’t see the ROI of managing a full Linux stack on an R620 plugged into a server rack vs. Workers when you factor in the cost of engineering time to maintain the former.
I do think this is a new world though. AWS doesn’t compare. I’d pick my R620s plugged into a server rack over giving AWS my credit card any day. AWS architectures are easy to bloat and get expensive fast - both in engineering cost and bills.
[+] [-] comprev|3 years ago|reply
The question I ask first is: as a company, what is an acceptable downtime per year?
I give some napkin calculated figures for 95%, 99%, 99.9% and 99.99% to show how both cost and complexity can skyrocket when chasing 9s.
They soon realise that a pair of live/standby servers might be more than suitable for their business needs at that particular time (and for the foreseeable future).
There is an untapped market of clients moving _away_ from the cloud.
[+] [-] newaccount74|3 years ago|reply
I've never had any performance issues. A $5 VPS is plenty for Apache, PHP, PostgreSQL, for a few thousand users a day.
I've started using multiple VPS, one for each service. Not for performance reasons, but for two things:
- isolation: if there's a problem with one service (eg. logs used up all disk space) it doesn't bring everything down at once
- maintainability: it's easier to upgrade services one by one than all at once
[+] [-] throwawaaarrgh|3 years ago|reply
How you develop an application depends completely on what you have available to you and what its use case is. If you don't have money, design it to be resource-efficient. If you do have money, design it to be a resource pig. If it needs to be high performance, design it to be very efficient. If it doesn't need to be high performance, just slap something together.
As a developer, you should know how to design highly efficient apps, and highly performant apps, and how to develop quick and dirty, and how to design for scalability, depending on the situation. It's like being a construction worker: you're going to work on very different kinds of buildings in your career, so learn different techniques when you can.
I highly recommend, for fun, trying to develop some apps inside a VM with very limited resources. It's pretty neat to discover what the bottlenecks are and how to get around them. You may even learn more about networking, file I/O, virtual memory allocation, CoW, threading, etc. (I wouldn't use a container to start with, as there are hidden performance issues that may be a distraction.)
[+] [-] uvesten|3 years ago|reply
If you're expecting comprehensive benchmarks or testing, save yourself the time.
[+] [-] bob1029|3 years ago|reply
Anyone in this industry is prone to these absolutist ideologies. I wasted a half-decade chasing perfection myself. In reality, there are very few real-world systems that cannot go down. One example of a "cannot fail" system I'd offer is the debit & credit card processing networks. The DoD operates most of the other examples.
The most skilled developer will look at a 100% uptime guarantee, laugh for a few moments, and then spin up an email to the customer in hopes of better understanding the nature of their business. We've been able to negotiate a substantially smaller operational footprint with all of our customers by being realistic with the nature and impact of failure.
If you can negotiate to operate your product on a single VM (ideally with the database being hosted on the same box), then you should absolutely do this and take the win. Even if you think you'll have to rewrite due to scale in the future, this will get you to the future.
Periodic, crash-consistent snapshots of block storage devices are a completely valid backup option. Many times it is perfectly OK to lose data. In most cases, you will need to reach a small compromise with the business owner where you develop an actual product feature to compensate for failure modes. An example of this for us would be emailing of important items to a special mailbox for recovery from a back-office perspective. The amount of time it took to develop this small product feature is not even .01% of the amount of time it would have taken to develop a multi-cloud, explosion-proof product.
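A minimal sketch of that kind of compensating feature (the mailbox address and item format are invented for illustration; actual delivery via smtplib is left out):

```python
from email.message import EmailMessage

RECOVERY_MAILBOX = "recovery@example.com"  # hypothetical back-office mailbox

def recovery_copy(subject: str, body: str) -> EmailMessage:
    """Build a plain-text copy of an important item, addressed to a
    dedicated mailbox so it can be re-entered by hand after a failure."""
    msg = EmailMessage()
    msg["To"] = RECOVERY_MAILBOX
    msg["Subject"] = f"[recovery] {subject}"
    msg.set_content(body)
    return msg

msg = recovery_copy("Invoice #1234 created", "customer=acme amount=99.00 currency=EUR")
```

After a failure, a human can reconstruct the lost records from the mailbox - far cheaper than engineering the failure away entirely.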
[+] [-] tgtweak|3 years ago|reply
Someone had a site set up to measure VPS providers by running a suite of tests every hour and collecting the results by hosting provider. It was surprising to see transient performance degradations, downtimes, and stark differences in performance for "2 vCPU 1GB RAM" plans depending on the hardware underneath and the level of overprovisioning.
Edit: the aptly named https://www.vpsbenchmarks.com/screener
[+] [-] RealStickman_|3 years ago|reply
They did give me my money back when I asked.
https://github.com/joedicastro/vps-comparison/issues/27
[+] [-] pella|3 years ago|reply
Hetzner https://www.hetzner.com/cloud
Scaleway https://www.scaleway.com/en/pricing/?tags=compute
[+] [-] zer0tonin|3 years ago|reply
From there I would export those metrics to a Grafana+InfluxDB setup. But honestly this is because that's what I'm used to professionally. There might be simpler solutions around.
[+] [-] m-o11y|3 years ago|reply
Disclaimer: I work there
[+] [-] kenniskrag|3 years ago|reply
https://cockpit-project.org/running
[+] [-] hu3|3 years ago|reply
I could use one for things like remote cronjobs.
[+] [-] rcarmo|3 years ago|reply
These days I’m running my static site builder, a few scrapers/RSS converters and a number of Mastodon-related services on it, on various kinds of cloud and physical hardware…
[+] [-] ec109685|3 years ago|reply
In this case, there are max 50 workers hitting the server, so you’d expect 50 parallel requests to be outstanding at once. 1300 req/sec with 50 workers works out to 26 req/sec per worker, i.e. roughly 38 msec/req, which matched the results.
So I wonder why the server being pegged didn’t affect things more? Super curious what the server side metrics were during the test.
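Sanity-checking that arithmetic with Little's law (W = L/λ, assuming all 50 workers stay fully saturated, i.e. 50 requests always in flight):

```python
# Back-of-the-envelope check on the benchmark numbers.
workers = 50         # concurrent requests in flight (L)
throughput = 1300    # req/sec across all workers (lambda)

per_worker_rate = throughput / workers      # req/sec handled by each worker
latency_ms = workers / throughput * 1000    # Little's law: W = L / lambda

print(f"{per_worker_rate:.0f} req/sec per worker, ~{latency_ms:.0f} ms per request")
```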
[+] [-] sys42590|3 years ago|reply
And you can do a LOT using such VMs, now that most are hosted on SSDs instead of spinning disks.
My take-away points are the following:
1) Beware of cheap OpenVZ offers (e.g. on LEB or WHT): performance is usually worse than with proper virtualization like KVM, and because OpenVZ has to be patched into the kernel, most offerings run a more or less outdated Linux kernel, which makes their security very questionable.
2) If your VM hosts "serious" data, make sure to do your research and use a reputable hosting provider. This may cost a bit more but will save you a lot of headache in the future.
3) Unless it's just a toy project, you should look into enabling replication of your data across two or three different VPS providers. While this at most triples your performance, the reliability will increase at least tenfold.
[+] [-] habibur|3 years ago|reply
But a ~$10 virtual box with 2GB of RAM works fine. Nothing to complain about. I get 2TB of transfer and 50GB of space.