This already happened. Hetzner, OVH, and countless other local cloud companies exist. Only the path of least resistance and market inertia stop companies from switching.
I run on Hetzner and am saving big bucks compared to the ridiculously high priced AWS.
> I run on Hetzner and am saving big bucks compared to the ridiculously high priced AWS.
IMO even Americans should take a look at whether they need to be using the big cloud providers or not. They're so much more expensive compared to smaller hosts like Hetzner, Digital Ocean, Vultr, and so on. It depends on what you're doing, of course, but I'm American and moved off of Azure last year due to the price and the complexity it encourages.
Reposting my comment from another thread on the same topic a few days ago:
> This is why I moved off of Azure and over to Hetzner's US VPSs. For what I was deploying (a few dozen websites, some relatively complex .NET web apps, some automated scripts, etc.), the pricing on Azure just wasn't competitive. But worse for me was the complexity: I found that using Azure encouraged me to introduce more and more complex deployment pipelines, when all I really needed was: build the container -> SCP it to a VPS running a blue/green scheme -> flip a switch after testing it.
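The build -> SCP -> flip pipeline above amounts to a small routine: deploy to whichever slot isn't live, test it, and only then point traffic at it. A minimal sketch of that flip logic (the slot names and health check are hypothetical; a real setup would copy the container to the VPS and run the check over SSH):

```python
# Minimal sketch of a blue/green flip. Slot names and the health check
# are illustrative assumptions; a real deploy would run the new container
# on the VPS and health-check it over the network before flipping.

def idle_slot(live: str) -> str:
    """The slot we deploy to is whichever one is not currently live."""
    return "green" if live == "blue" else "blue"

def deploy(live: str, health_check) -> str:
    """Deploy to the idle slot, test it, and flip only if the check passes."""
    target = idle_slot(live)
    # ... scp the new container image to `target` and start it here ...
    if health_check(target):
        return target   # flip: the idle slot becomes live
    return live         # check failed: keep serving from the old slot

# A passing health check flips blue -> green; a failing one leaves blue live.
assert deploy("blue", lambda slot: True) == "green"
assert deploy("blue", lambda slot: False) == "blue"
```

The nice property is that the old slot stays untouched until the new one has passed its checks, so rollback is just not flipping.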
Comparing EU cloud providers to AWS is like comparing a 1963 Zastava to a 2025 high-end BYD because both of them are cars and can drive from point A to point B.
Well, only if the Zastava had 5-10x the horsepower and storage space of the BYD for the same amount of money, because that's often the reality. Bare metal is unreasonably cost-efficient compared to cloud services, and it doesn't take that much more know-how.
I do technical due diligence work for investment funds and the like, and one thing I often see is slow, complex, expensive AWS-heavy architectures that optimize for problems the company doesn't have and often never will. In theory this is to ensure stability and scalability; in practice it produces high bills and nightmarish configuration complexity.
In practice complexity tends to lead to more outages and performance issues than if you had a much simpler (rented) bare metal setup with some spare capacity and better architecture design. More than half of serious outages I have seen documented in these reviews came from configuration mistakes or bugs in software that is supposed to manage your resources.
Never mind that companies invest serious amounts of time trying to manage complexity rather than remove it.
A few years ago I worked for a company that had two competing systems. One used AWS sparingly: just EC2, S3, RDS and load balancers. The other went berserk in the AWS candy shop and was this monstrosity that used 20-something different AWS services glued together by lambdas. This was touted as “the future”, and everyone who didn’t think it was a good idea was an idiot.
The simple solution cost about the same to run for a few thousand (business) customers as the complex one cost for ONE customer. The simple solution cost about 1/20 as much to develop. It also had about 1/2500 the latency on average, because it wasn't constantly enqueuing and dequeuing data through a slow maze of SQS queues.
And best of all: you could move the simpler solution to bare metal servers. In fact, we ran all the testing on clusters of 6 RPIs. The complex solution was stuck in AWS forever.
When you compare IT stuff to cars, the discussion pivots to discussing cars. Please think twice before using analogies and comparisons with the physical world.
Except 95% of companies have no need for an ultra-scalable super cloud.
If you are a very big SaaS company that is not Google or Apple, you are probably serving hundreds of thousands, maybe millions of unique users. AWS may be convenient, but you don't /need/ it; you can build an infrastructure that will handle such a workload with any of the big European providers.
You'll just lose in comfort what you'll gain in data sovereignty and infrastructure costs.
I worked for a company with 7M€ MRR that had maybe a million users who used the software every day. The whole thing ran on a dozen OVH servers, including multi-site redundancy.
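A back-of-envelope check makes a dozen servers for a million daily users plausible (the per-user request rate and peak factor below are assumptions for illustration, not figures from that company):

```python
# Rough capacity estimate: can ~12 servers serve a million daily users?
# Per-user request rate and peak factor are assumed, not measured.

DAILY_USERS = 1_000_000
REQUESTS_PER_USER_PER_DAY = 100   # assumed
PEAK_FACTOR = 5                   # assumed: peak traffic vs. daily average
SERVERS = 12

avg_rps = DAILY_USERS * REQUESTS_PER_USER_PER_DAY / 86_400  # seconds per day
peak_rps = avg_rps * PEAK_FACTOR
per_server = peak_rps / SERVERS

print(f"average: {avg_rps:,.0f} req/s, peak: {peak_rps:,.0f} req/s")
print(f"peak per server: {per_server:,.0f} req/s")
```

Under these assumptions, peak load works out to a few hundred requests per second per server, which is comfortably within reach of an ordinary bare metal box serving a well-designed application.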
Scaleway (and maybe UpCloud as well) is also great. At least Scaleway, from what I know, has a broad feature set and is really competitive with the offerings it provides.
niemandhier|29 days ago
I want a 1985 Mercedes that is built like a tank and outlives me.
Imustaskforhelp|29 days ago
Your point's a little moot.
elygre|29 days ago
The basic services are more or less the same, but the hyperscalers provide hundreds of services where smaller providers have only ten.
tossandthrow|29 days ago
Computing at this scale is not marketed to flashy fanbois.