adamcharnock|26 days ago
1 - Cloud – This is minimising cap-ex, hiring, and risk, while largely maximising operational cost (it's expensive) and cost variability (usage-based).
2 - Managed Private Cloud - What we do. Still minimal-to-no cap-ex, hiring, risk, and medium-sized operational cost (around 50% cheaper than AWS et al). We rent or colocate bare metal, manage it for you, handle software deployments, deploy only open-source, etc. Only really makes sense above €$5k/month spend.
3 - Rented Bare Metal – Let someone else handle the hardware financing for you. Still minimal cap-ex, but with greater hiring/skilling and risk. Around 90% cheaper than AWS et al (plus time).
4 - Buy and colocate the hardware yourself – Certainly the cheapest option if you have the skills, scale, cap-ex, and if you plan to run the servers for at least 3-5 years.
A good provider for option 3 is someone like Hetzner. Their internal ROI on server hardware seems to be around the 3-year mark, after which I assume it is either still running with a client or goes into their server auction system.
Options 3 & 4 generally become more appealing either at scale, or when infrastructure is part of the core business. Option 1 is great for startups who want to spend very little initially, but then grow very quickly. Option 2 is pretty good for SMEs with baseline load, regular-sized business growth, and maybe an overworked DevOps team!
[0] https://lithus.eu, adam@
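The rough cost multipliers in the list above can be sketched as a quick comparison. The baseline bill and the percentages are the comment's rough figures ("around 50% cheaper", "around 90% cheaper"), used purely for illustration:

```python
# Rough monthly-cost comparison using the multipliers from the comment above.
# The baseline spend and multipliers are illustrative assumptions, not quotes.
CLOUD_MULTIPLIERS = {
    "1_cloud": 1.00,             # AWS et al., the baseline
    "2_managed_private": 0.50,   # "around 50% cheaper than AWS et al"
    "3_rented_bare_metal": 0.10, # "around 90% cheaper than AWS et al"
}

def monthly_costs(cloud_spend_eur: float) -> dict:
    """Scale a hypothetical hyperscaler bill by each option's rough multiplier."""
    return {option: round(cloud_spend_eur * m, 2)
            for option, m in CLOUD_MULTIPLIERS.items()}

costs = monthly_costs(10_000)  # a hypothetical €10k/month cloud bill
```

Option 4 is left out because its cost depends on cap-ex amortisation over the 3-5 year horizon mentioned above, not a simple monthly multiplier.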
torginus|26 days ago
At the core of this are all the 'managed' services - if you have a server box, it's in your financial interest to squeeze as much perf out of it as possible. If you're using something like ECS or serverless, AWS gains nothing by optimizing the servers to make your code run faster - their hard work results in fewer billed infrastructure hours.
This 'microservices' push usually means that instead of having an on-server session where you can serve stuff from a temporary cache, all the data that persists between requests needs to be stored in a DB somewhere, all the auth logic needs to re-check your credentials, and something needs to direct the traffic and load-balance these endpoints - and all this stuff costs money.
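For illustration, the kind of in-process, between-request cache described above is only a few lines of code - no external store and no extra network hop per request. A toy sketch, not production session handling:

```python
import time

class TTLCache:
    """Minimal in-process cache: state that survives between requests on the
    same server box, without a round-trip to an external DB per request."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=60)
cache.set("session:42", {"user": "alice", "authed": True})
```

The trade-off, of course, is that this cache is per-box: the moment requests can land on any of several stateless instances, that state has to move to a shared store - which is exactly the cost the comment is pointing at.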
I think if you have 4 Java boxes as servers with a redundant DB with read replicas on EC2, your infra is so efficient and cheap that even paying 4x for it rather than going for colocation is well worth it because of the QoL and QoS.
These crazy AWS bills usually come from using every service under the sun.
bojangleslover|26 days ago
1) Senior engineer starts on AWS
2) Senior engineer leaves because our industry does not value longevity or loyalty at all whatsoever (not saying it should, just observing that it doesn't)
3) New engineer comes in and panics
4) Ends up using a "managed service" to relieve the panic
5) New engineer leaves
6) Second new engineer comes in and not only panics but outright needs help
7) Paired with some "certified AWS partner" who claims to help "reduce cost" but who actually gets a kickback from the extra spend they induce (usually 10% if I'm not mistaken)
Calling it ransomware is obviously hyperbolic, but there are definitely some parallels one could draw.
On top of it all, AWS pricing is about to go up massively due to the RAM price increase. There's no way it won't, since AWS is over half of Amazon's profit while only around 15% of its revenue.
mrweasel|26 days ago
coredog64|26 days ago
If ECS is faster, then you're more satisfied with AWS and less likely to migrate. You're also open to additional services that might bring up the spend (e.g. ECS Container Insights or X-Ray)
Source: Former Amazon employee
parentheses|26 days ago
Microservices are a killer for cost. For each microservice pod you're often running a bunch of sidecars - Datadog, auth, ingress - and you pay massive workload-separation overhead in orchestration, management, monitoring, and of course complexity.
I am just flabbergasted that this is how we operate as a norm in our industry.
lumost|26 days ago
My biggest gripe with this is async tasks where the app performs numerous hijinks to avoid a 10-minute Lambda processing timeout. Rather than structuring the process to handle many independent, small batches, or simply using a modest container to do the job in a single shot, a myriad of intermediate steps are introduced to write data to Dynamo/S3/Kinesis, plus SQS and coordination.
A dynamically provisioned, serverless container with 24 cores and 64 GB of memory can happily process GBs of data transformations.
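A minimal sketch of the "single shot" alternative: one long-running process streaming through the data in fixed-size batches, with no intermediate queues, tables, or step orchestration. The batch size and transform are illustrative placeholders:

```python
def process_in_batches(records, batch_size=1000, transform=str.upper):
    """Stream a large dataset through one long-running process in fixed-size
    batches - no timeout-dodging writes to S3/Dynamo/SQS between steps."""
    batch = []
    for record in records:
        batch.append(transform(record))
        if len(batch) >= batch_size:
            yield batch   # hand off a completed batch (e.g. bulk-write it)
            batch = []
    if batch:             # flush the final partial batch
        yield batch

batches = list(process_in_batches(["a", "b", "c"], batch_size=2))
```

Because it's a generator, memory stays bounded at one batch regardless of input size, which is what lets a single modest container chew through GBs of transformations.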
re-thc|26 days ago
You don’t need colocation to save 4x though. Bandwidth pricing is 10x. EC2 is 2-4x, especially outside the US. EBS, for its IOPS, is just bad.
jdmichal|26 days ago
If you can keep 4 "Java boxes" fed with work 80%+ of the time, then sure EC2 is a good fit.
We do a lot of batch processing and save money over having EC2 boxes always on. Sure we could probably pinch some more pennies if we managed the EC2 box uptime and figured out mechanisms for load balancing the batches... But that's engineering time we just don't really care to spend when ECS nets us most of the savings advantage and is simple to reason about and use.
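The utilisation trade-off above can be sketched as a break-even calculation. The hourly rates here are illustrative placeholders, not real AWS prices - the point is only the shape of the comparison:

```python
def cheaper_option(utilisation, always_on_hourly=0.10, per_use_hourly=0.15):
    """Compare an always-on box against paying a higher rate only for busy
    hours. Rates are hypothetical; utilisation is the fraction of the month
    the workload actually runs."""
    hours_per_month = 730  # approximate hours in a month
    always_on_cost = always_on_hourly * hours_per_month
    per_use_cost = per_use_hourly * hours_per_month * utilisation
    return "always-on" if always_on_cost < per_use_cost else "per-use"
```

With these placeholder rates the break-even sits at 2/3 utilisation, which matches the intuition above: keep the boxes fed 80%+ of the time and always-on wins; bursty batch work favours paying per use.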
BobbyTables2|25 days ago
It runs slower than a bloated pig, especially on a shared hosting node, so now needs Kubernetes and cloud orchestration to make it “scalable” - beyond a few requests per second.
nthdesign|26 days ago
bojangleslover|26 days ago
[0] https://carolinacloud.io, derek@
spwa4|26 days ago
So in practice cloud has become the more expensive option the second your spend goes over the price of 1 engineer.
Lucasoato|26 days ago
mrweasel|26 days ago
I see it from the other direction: if something fails, I have complete access to everything, meaning that I have a chance of fixing it - down to the hardware, even. When I run stuff in the cloud, things get abstracted away, hidden behind APIs, and data lives beyond my reach.
Security and regular mistakes are much the same in the cloud, but I then have to layer whatever complications the cloud provider comes with on top. The cost has to be much, much lower if I'm going to trust a cloud provider over running something in my own data center.
adamcharnock|26 days ago
We figured, "Okay, if we can do this well, reliably, and de-risk it; then we can offer that as a service and just split the difference on the cost savings"
(plus we include engineering time proportional to cluster size, and also do the migration on our own dime as part of the de-risking)
wulfstan|26 days ago
Expect a significant exit expense, though, especially if you are shifting large volumes of S3 data. That's been our biggest expense. I've moved this to Wasabi at about 8 euros a month (vs about $70-80 a month on S3), but I've paid transit fees of about $180 - and it was more expensive because I used DataSync.
Retrospectively, I should have just DIYed the transfer, but maybe others can benefit from my error...
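For rough planning, the egress maths is simple per-GB multiplication. A sketch assuming the commonly cited first-tier S3 internet transfer-out rate of $0.09/GB - verify current pricing, and note that services like DataSync add their own per-GB charge on top:

```python
def egress_cost_usd(gb: float, per_gb: float = 0.09) -> float:
    """Back-of-envelope S3 internet egress estimate. The $0.09/GB default is
    the commonly cited first-tier transfer-out rate, used illustratively;
    per-service fees (e.g. DataSync) are extra."""
    return round(gb * per_gb, 2)

# At the assumed rate, ~2 TB out comes to $180 - in the ballpark of the
# transit fee mentioned above (the actual breakdown isn't given).
estimate = egress_cost_usd(2000)
```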
iso1631|26 days ago
Out of interest, how old are you? This was quite normal expectation of a technical department even 15 years ago.
baby|26 days ago
objektif|26 days ago
ibejoeb|26 days ago
The flip side is that compliance is a little more involved. Rather than, say, carve out a whole swathe of SOC-2 ops, I have to coordinate some controls. It's not a lot, and it's still a lot lighter than I used to do 10+ years ago. Just something to consider.
boplicity|26 days ago
edge17|26 days ago
mgaunard|26 days ago
There is a world of difference between renting some cabinets in an Equinix datacenter and operating your own.
adamcharnock|26 days ago
5 - Datacenter (DC) - Like 4, except also take control of the space/power/HVAC/transit/security side of the equation. Makes sense either at scale, or if you have specific needs. Specific needs could be: specific location, reliability (higher or lower than a DC), resilience (conflict planning).
There are actually some really interesting use cases here. For example, reliability: If your company is in a physical office, how strong is the need to run your internal systems in a data centre? If you run your servers in your office, then there's no connectivity reliability concerns. If the power goes out, then the power is out to your staff's computers anyway (still get a UPS though).
Or perhaps you don't need as high reliability if you're doing only batch workloads? Do you need to pay the premium for redundant network connections and power supplies?
If you want your company to still function in the event of some kind of military conflict, do you really want to rely on fibre optic lines between your office and the data center? Do you want to keep all your infrastructure in such a high-value target?
I think this is one of the more interesting areas to think about, at least for me!
weavie|26 days ago
adamcharnock|26 days ago
It sounds like they probably have revenue in the €500mm range today. And given that the bare metal cost of AWS-equivalent bills tends to be a 90% reduction, we'll say a €10mm+ bare metal cost.
So I would say a cautious and qualified "yes". But I know even for smaller deployments of tens or hundreds of servers, they'll ask you what the purpose is. If you say something like "blockchain," they're going to say, "Actually, we prefer not to have your business."
I get the strong impression that while they naturally do want business, they also aren't going to take a huge amount of risk on board themselves. Their specialism is optimising on cost, which naturally has to involve avoiding or mitigating risk. I'm sure there'd be business terms to discuss, put it that way.
geocar|26 days ago
Netflix might be spending as much as $120m (but probably a little less), and I thought they were probably Amazon's biggest customer. Does someone (single-buyer) spend more than that with AWS?
Hetzner's revenue is somewhere around $400m, so it's probably a little scary taking on an additional 30% of revenue from a single customer, and Netflix's shareholders would probably worry about the risk of relying on a vendor that is much smaller than them.
Sometimes, if the companies are friendly to the idea, they could form a joint venture, or maybe Netflix could just acquire Hetzner (and compete with Amazon?), but I think it unlikely Hetzner could take on a Netflix-sized customer for nontechnical reasons.
However, increasing PoP capacity by 30% within 6 months is pretty realistic, so I think they'd probably be able to physically service Netflix without changing too much, if management could get comfortable with the idea.
DyslexicAtheist|26 days ago
> Buy and colocate the hardware yourself – Certainly the cheapest option if you have the skills
Back then, this type of "skill" was abundant. You could easily get sysadmin contractors who would take a drive down to the data center (probably rented facilities in real estate that belonged to a bank or insurer) to exchange some disks that had died for some reason. Such a person was full-stack in the sense that they covered backups, networking, firewalls, and knew how to source hardware.
The argument was that this was too expensive and the cloud was better. So hundreds of thousands of SMEs embraced the cloud - most of them never needed Google-type scale, but got sucked into the "recurring revenue" grift that is SaaS.
If you opposed this mentality you were basically saying "we as a company will never scale this much" which was at best "toxic" and at worst "career-ending".
The thing is, these ancient skills still exist. And most orgs simply do not need AWS-type scale. European orgs would do well to revisit these basic ideas. And Hetzner or Lithus would be a much more natural (and honest) fit for these companies.
belorn|26 days ago
theodric|26 days ago
It baffles me that my career trajectory somehow managed to insulate me from ever having to deal with the cloud, while such esoteric skills as swapping a hot swap disk or racking and cabling a new blade chassis are apparently on the order of finding a COBOL developer now. Really?
I can promise you that large financial institutions still have datacenters. Many, many, many datacenters!
sanderjd|26 days ago
If you're willing to share, I'm curious who else you would describe as being in this space.
My last decade and a half or so of experience has all been in cloud services, and prior to that it was #3 or #4. What was striking to me when I went to the Lithus website was that I couldn't figure out any details without hitting a "Schedule a Call" button. This makes it difficult for me to map my experiences in using cloud services onto what Lithus offers. Can I use Terraform? How does the Kubernetes offering work? How do the ML/AI data pipelines work? To me, it would be nice if I could try it out in a very limited way as self-service, or at least read some technical documentation. Without that, I'm left wondering how it works. I'm sure this is a conscious decision, and for good reasons, but I thought I'd share my impressions!
adamcharnock|26 days ago
We're not really that kind of product company; we're more of a services company. What we do is deploy Kubernetes clusters onto bare metal servers. That's the core technical offering. However, everything beyond that is somewhat per-client. Some clients need a lot of compute. Some clients need a custom object storage cluster. Some clients need a lot of high-speed internal networking. Which is why we prefer to have a call to figure out specifically what your needs are. But I can also see how this isn't necessarily satisfying if you're used to just grabbing the API docs and having a look around.
What we will do is take your company's software stack, migrate it off AWS/Azure/Google, and deploy it onto our new infrastructure. We will then become (or work with) your DevOps team to support you. This can be anything from containerising workloads to diagnosing performance issues to deploying a new multi-region Postgres cluster. Whatever you need done on your hardware that we feel we can reasonably support. We are the ones on-call should NATS fall over at 4am.
Your team also has full access to the Kubernetes cluster to deploy to as you wish.
I think the pricing page is the most concrete thing on our website, and it is entirely accurate. If you were to phone us and say, "I want that exact hardware," we would do it for you. But the real value we also offer is in the DevOps support we provide, actually doing the migration up-front (at our own cost), and being there working with your team every week.
whiplash451|26 days ago
Unfortunately, (successful) startups can quickly get trapped in this option. If they're growing fast, everyone on the board will ask why you'd move to another option in the first place. The cloud becomes a very deep local minimum that's hard to get out of.
eru|26 days ago
Is it still the cheapest after you take into account that skills, scale, cap-ex and long term lock-in also have opportunity costs?
graemep|26 days ago
You can get locked into cloud too.
The lock-in is not really long-term, as it's an easy option to migrate off of.
chrisandchris|25 days ago
LeFantome|25 days ago
The cloud is great when you just need to start and when you do not know what scale you will need. Minimal initial cost and no wasted time over planning things you do not know enough about.
The cloud is horrible for steady-state demand. You are over-paying for your base load. If your demand does not scale that much, you do not benefit from the flexibility. Distance from the edge can cause performance problems. In an effort to “save money” you will chase complexity and bake in total reliance on your cloud provider.
Starting with the cloud makes sense. Just make sure not to engineer a solution you cannot take somewhere else.
As you scale and demand becomes known, you can start to migrate some stuff on premises or to other managed providers.
The great thing about “cloud architecture” is that you can use a hybrid model. You can selectively move parts of the stack. You can host your baseline demand and still rely on the cloud for scalability.
Where you need to spend the money and gain the expertise is in design. Not a giant features waterfall but rather knowing how to build an application and infrastructure that is adaptable and portable as you scale.
Keep it simple but also keep it modular.
At least, that had been my experience.
themafia|25 days ago
The core services are cheap. S3 is cheap. Dynamo is cheap. Lambda is exceedingly cheap. Not understanding these services on their own terms and failing to read the documentation can lead one to use them in highly inefficient ways.
The "cloud" isn't just "another type of server." It's another type of /service/. Every costly stack I've seen fails to accept this truth.
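A back-of-envelope sketch of what "cheap on its own terms" looks like for Lambda. The GB-second rate below is the widely published x86 compute price, used illustratively - verify against current pricing; request charges and the free tier are ignored for simplicity:

```python
def lambda_compute_cost(invocations: int, ms_per_invocation: float,
                        memory_gb: float,
                        price_per_gb_second: float = 0.0000166667) -> float:
    """Estimate Lambda compute cost as GB-seconds * rate. The default rate
    is the widely published x86 figure, used here as an assumption."""
    gb_seconds = invocations * (ms_per_invocation / 1000) * memory_gb
    return gb_seconds * price_per_gb_second

# 1M invocations at 100 ms each with 128 MB allocated: ~12,500 GB-seconds,
# i.e. roughly twenty cents of compute.
cost = lambda_compute_cost(1_000_000, 100, 0.125)
```

The flip side of the same arithmetic is why misuse gets expensive: over-allocate memory or let a handler idle-wait on I/O for seconds per call, and the GB-seconds term grows by orders of magnitude.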
preisschild|26 days ago
https://docs.hetzner.com/cloud/technical-details/faq/#what-k...
victorbjorklund|26 days ago
Schlagbohrer|26 days ago
adamcharnock|26 days ago
It works because bare metal is about 10% the cost of cloud, and our value-add is in 1) creating a resilient platform on top of that, 2) supporting it, 3) being on-call, and 4) being or supporting your DevOps team.
This starts with us providing a Kubernetes cluster which we manage, but we also take responsibility for the services run on it. If you want Postgres, Redis, Clickhouse, NATS, etc, we'll deploy it and be SLA-on-call for any issues.
If you don't want to deal with Kubernetes then you don't have to. Just have your software engineers hand us the software and we'll handle deployment.
Everything is deployed on open-source tooling, and you have access to all the configuration for the services we deploy. You have server root access. If you want to leave, you can.
Our customers have full root access, and our engineers (myself included) are in a Slack channel with your engineers.
And, FWIW, it doesn't have to be Hetzner. We can colocate or use other providers, but Hetzner offer excellent bang-per-buck.
Edit: And all this is included in the cluster price, which comes out cheaper than the same hardware on the major cloud providers
victorbjorklund|26 days ago
CrzyLngPwd|26 days ago
We rent hardware and also some VPS, as well as use AWS for cheap things such as S3 fronted with Cloudflare, and SES for priority emails.
We have other services we pay for, such as AI content detection, disposable email detection, a small postal email server, and more.
We're only a small business, so having predictable monthly costs is vital.
Our servers are far from maxed out, and we process ~4 million dynamic page and API requests per day.
megggan|26 days ago
bell-cot|26 days ago
Archelaos|26 days ago
doctorpangloss|26 days ago
jgalt212|26 days ago
rcpt|26 days ago
adarsh2321|26 days ago
[deleted]
bpavuk|26 days ago
plus, infra flexibility removes random constraints that e.g. Cloudflare Workers have
slyall|26 days ago
Reality is these days you just boot a basic image that runs containers
[0] Longer list here: https://github.com/alexellis/awesome-baremetal
adamcharnock|26 days ago
aequitas|26 days ago
muvlon|26 days ago
preisschild|26 days ago