top | item 42645012

Why aren't we all serverless yet?

48 points | srvaroa | 1 year ago | varoa.net

133 comments


danieloj|1 year ago

I still find the DevEx of serverless terrible compared to the well-established monolith frameworks available to us.

The YAML config, IAM permissions, generating requests and responses, it's all so painful to get anything done.

Admittedly I speak as a software engineer primarily building CRUD apps, where frameworks have had decades of development. I can see use cases for event-driven applications where serverless may make life easier. But for CRUD, currently no chance.

giancarlostoro|1 year ago

Serverless can be useful for very specific tasks, such as processing files you upload or things that should happen in the background, but if you already have a simple monolith web app, I don't see why going serverless just to go serverless will help you.

I do see its usefulness, but it's not a one-size-fits-all tool.

sodapopcan|1 year ago

> Admittedly I speak as a software engineer primarily building CRUD apps

Ya, this is the majority of us.

dolmen|1 year ago

Or maybe we just lack frameworks that provide the same developer experience but with transparent serverless deployment?

jvanderbot|1 year ago

I find serverless to be a breeze, with zero sysadmin costs compared to setting up VPS, EC2, doing your own custom monitoring, etc. Each to their own, however.

And gateway+lambda is a near perfect "dumb crud" app, though it is not without a startup cost.

breckenedge|1 year ago

There is no good reason to build a distributed monolith. You can always think of/design your monolith as a collection of (micro-)services and get the best of both worlds.

I find FaaS best when needing to automate something completely unrelated to what goes into serving the customer. Stuff like bots to report CWV metrics from DataDog to a Slack channel.

ebiester|1 year ago

I think that's true for smaller shops. Larger shops start building their developer experience over everything and you can make it work.

But that means you're not starting with serverless, and it's your pivot from the original monolith.

moltar|1 year ago

If you use AWS CDK the DX is amazing.

baobun|1 year ago

This misses the main factor, I think: Vendor lock-in.

There is no unification of APIs: every provider has their own bespoke abstractions, typically requiring heavy integration into further vendor-specific services, more so if you are to leverage their USPs.

Testing and reproducing locally is usually a pipe dream (or takes significantly more effort than the production deploy). Migrating to a different cloud usually requires significant rewrites and sometimes rearchitecting.

jjice|1 year ago

This is the reason I tend not to use a serverless solution in most cases.

I want my code to be written and executed on my machine in a way that can at least kind of resemble the production execution environment. I want a binary that gets run and some IO access, most of the time.

If I have a VM or a "serverless"-style compute like Fargate on ECS, I can define an entry point, some environment variables, and we're off to the races in a very similar environment to my local (thank god for containers and VMs).

The _idea_ of lambda and the similar services is awesome to me, but it's just such a PITA to deal with as a developer, at least in my experience.
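
A hedged sketch of that gap (the handler names and event shape are illustrative of the API-Gateway-style envelope, not taken from any real project): the business logic itself is portable, but the provider-specific wrapper means local testing amounts to hand-crafting the provider's event format.

```python
import json

def create_user(name: str) -> dict:
    # Plain business logic: runs identically on a laptop or any server.
    return {"created": name}

def lambda_handler(event, context):
    # Provider-specific entry point: the event arrives as an
    # API-Gateway-shaped dict rather than a normal HTTP request object.
    body = json.loads(event["body"])
    return {
        "statusCode": 200,
        "body": json.dumps(create_user(body["name"])),
    }

if __name__ == "__main__":
    # Local testing means faking the provider's envelope by hand:
    fake_event = {"body": json.dumps({"name": "ada"})}
    print(lambda_handler(fake_event, None))
```

With a container or VM, `create_user` would sit behind an ordinary HTTP route and the local process would match production far more closely.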

Vt71fcAqt7|1 year ago

Google Cloud Run and Azure Container Apps both let you run an arbitrary Docker image without having to deal with custom setups. Both scale automatically, so they are serverless. AWS has App Runner, but it doesn't scale to zero.[0]

[0] https://github.com/aws/apprunner-roadmap/issues/9 (amusingly the issue OP posts on HN)

pjmlp|1 year ago

For example, Vercel and Netlify both run on top of AWS, yet their serverless offerings are a tiny subset of Lambda's capabilities.

nilamo|1 year ago

There are a few platform abstractions. Quarkus, a Java framework, has Funqy, an extension that abstracts the differences between something like aws Lambda and Knative triggers, and feels quite easy to use.

https://quarkus.io/guides/funqy

icy|1 year ago

I’m building a serverless platform with the familiar interface of Kubernetes: https://kapycluster.com. Does this fit your expectations?

mg|1 year ago

Because most applications have 27 active users per day and a $10/month VPS can handle 100000.

skc|1 year ago

With a Sqlite database at that.

Olreich|1 year ago

This article misses the most important reason to not use Serverless: Cost. It's way more expensive to run serverless than it is to run any other format, even something like AWS Fargate is better than Lambda if you keep your Lambda running for 5% of the time.

The second one is even more important though: Time. How many of my systems are guaranteed to stop after 15 minutes or less? Web Servers wouldn't like that, anything stateful doesn't like that. Even compute-heavy tasks where you might like elastic scaling fall down if they take longer than 15 minutes sometimes, and then you've paid for those 15 minutes at a steep premium and have to restart the task.

Serverless only makes sense if I can do it for minor markup compared to owning a server and if I can set time limits that make sense. Neither of these are true in the current landscape.
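
The break-even in that argument can be sketched as back-of-envelope arithmetic (the prices below are illustrative placeholders, not a current rate card, and real bills add request charges, egress, storage, etc.):

```python
# Back-of-envelope break-even between pay-per-use and always-on compute.
PER_GB_SECOND = 0.0000166667   # illustrative $/GB-second, Lambda-style billing
ALWAYS_ON_MONTHLY = 30.0       # illustrative $/month for an always-on 1 GB task

def break_even_utilization(mem_gb: float = 1.0) -> float:
    """Fraction of the month you can stay busy before always-on wins."""
    seconds_per_month = 30 * 24 * 3600
    full_time_cost = mem_gb * seconds_per_month * PER_GB_SECOND
    return ALWAYS_ON_MONTHLY / full_time_cost

print(f"{break_even_utilization():.0%}")  # ~69% at these illustrative rates
```

Where the crossover actually lands depends entirely on the rates and memory sizes you plug in, which is part of why estimating serverless costs up front is hard.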

ElevenLathe|1 year ago

This is why I use serverless (API Gateway + Lambda) for super low traffic stuff. If I have a cron that runs 24 times a day for 12 seconds, or a service that occasionally gets a request every few days, it makes sense not to deal with the overhead and waste of a server or container running constantly.

Trasmatta|1 year ago

For me, it's because Rails has continued to be an excellent solution in every application I've ever needed to build, whether it's a project with 1 user or 10 million, or with a dev team of 1 or 100.

Every time I try to solve a problem with anything other than Rails, I run into endless issues and headaches that would have been already solved if I just. used. Rails.

bdcravens|1 year ago

Even when you have a problem set bigger than Rails, you can keep everything in the Rails world and use something like Sidekiq to manage most of the backend complexity. For many cases, reasonable polling works as well as event-driven architecture, but if you absolutely have to go event-driven, one-off lambdas that talk to Sidekiq or other parts of your Rails stack work well enough.

abenga|1 year ago

It sounds like you have mastered Rails and know how to solve all of its "issues and headaches".

pjc50|1 year ago

> The median product engineer should reason about applications as composites of high-level, functional Lego blocks where technical low-level details are invisible

This is a good way to get nonfunctioning product. Or at least a lot of frustrating meetings.

The thing is, "serverless" still has a server in it; it's just one that you don't own or control, and instead lease in short timeslices. Mostly that doesn't matter, but the costs are really there.

recursivedoubts|1 year ago

they constantly try to escape

from the complexity outside and within

by dreaming of abstractions so perfect that no one will need to be good

but the latency that is will shadow

the "simple" that pretends to be

tucnak|1 year ago

I've heard the noise of a virtual machine,

Now I'm stuck in the reality of backlash

And cashed–in chips.

isoprophlex|1 year ago

grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too

seem very confusing to grug

tw04|1 year ago

Because it's more expensive for less performance with less control.

If it were 5% worse performance for 7% more cost, most people would probably not bat an eye.

When it can be 50% less performant for 200% more cost, eventually someone is going to say: sure there's overhead to owning that but I will be at a major competitive advantage if I can do it even just OK. And it turns out for most businesses doing it at the scale they need isn't all that difficult to get right.

freefaler|1 year ago

Indeed... I've run on Hetzner for 20 years with triple redundancy, plus VPSes for batch processing/CI and some internal tasks. My costs are fixed, and our service only has downtime during very big database alters/upgrades/migrations.

I have a friend who recently made a stupid bug in his processing pipeline on AWS. He woke up one morning to a message from his bank that his credit card was over the limit.

When we have a bug, our Nagios sends us a message that responses are more than 150% of average, and we do a rollback.

So it's not only the risk of vendor lock-in, but also surprising bills, policy changes, updates, and other third-party risks you end up with.

hylaride|1 year ago

This depends highly on workload. We migrated a service that generates terabytes of content to send to customers each day. We moved the content generation from J2EE to Java lambdas, and our costs went from $6K/month of EC2 (on savings plans, even) to ~$400/month in Lambda, SQS, and ElastiCache/Redis costs, and the work was done in 1/8th the time. Mind you, our content is highly bursty, and we need to be able to generate it within seconds of initiation.

Serverless also means a lot of things. We also serve static content from an S3 bucket and CloudFront. Nothing else to manage once it's set up.

The flip side of serverless is you really do need to think of state yourself. The J2EE code was rock solid in reliability, including recovering from almost every kind of issue you can imagine over a decade (database, connectivity, software crashes).

dgfitz|1 year ago

> The median product engineer should reason about applications as composites of high-level, functional Lego blocks where technical low-level details are invisible. Serverless represents just about the ultimate abstraction for this mindset.

I think the answer is in the first sentence. A lot of engineers make products that don't touch the internet. This concept is lost in the noise quite a bit.

badlibrarian|1 year ago

Because we use Nix recipes to deploy our Datalog-ish backend connectors that talk to Amazon Elastic Beanstalk via a bespoke database we wrote in Julia that's deployed on Snowflake Container Cloud. But there's a missing backslash somewhere and nobody can find it because even ChatGPT cannot decipher the error messages.

Maybe it's an expired certificate, but the guy who knew how that stuff works built a 12,000-line shell script that uses awk, perl, and a cert library that for some reason requires both CMake and Autotools. It also requires GCC 4.6.4 because nobody can figure out how to turn off warnings-as-errors.

RickHull|1 year ago

The problem with astronaut architecture is that nobody tells you about (or has a handle on) all the space junk.

pluc|1 year ago

Because serverless doesn't exist. Serverless just means it runs on someone else's servers, just like the cloud. And 10 years down the road people have forgotten how to run basic things, but Bezos buys Panama.

bdcravens|1 year ago

Wireless routers also have wires.

9rx|1 year ago

In software, a server refers to an application that listens for requests from a client.

If you remember the olden days of web development, when CGI was king, the web applications didn't listen. Instead, a separate web server (e.g. Apache) called upon the application as a subprocess and communicated with it using system primitives like environment variables, stdin, and stdout.

Over time, we started moving away from the CGI model, moving the server process into the application itself. While often a fronting web server (e.g. nginx) would proxy the requests to the application, technically the application was able to stand on its own.

Serverless returns to the old CGI model, although not necessarily using the CGI protocol anymore, removing the server from the application. The application is less a server, hence the name.
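
That CGI model can be sketched in a few lines of Python (a toy, not a spec-complete CGI program): the application is just a subprocess that reads request metadata from the environment and the body from stdin, then writes its response to stdout.

```python
import os
import sys

def cgi_respond(environ, body: str) -> str:
    # Build a CGI-style response: headers, a blank line, then the payload.
    method = environ.get("REQUEST_METHOD", "GET")
    return f"Content-Type: text/plain\r\n\r\nmethod={method} body_len={len(body)}\n"

if __name__ == "__main__":
    # The fronting web server (e.g. Apache) sets request metadata in
    # environment variables, pipes the request body to stdin, and reads
    # the response from stdout; the application never listens on a socket.
    body = sys.stdin.read() if os.environ.get("REQUEST_METHOD") == "POST" else ""
    sys.stdout.write(cgi_respond(os.environ, body))
```

The structural resemblance to a serverless function is the point: in both cases the "server" part lives outside the application.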

jmclnx|1 year ago

It is too bad Plan 9 did not take off; from what I read, that system was designed for a serverless environment. You can use resources like memory, disk, and CPU cycles from many other Plan 9 systems at the same time.

Of course, I think that would be a DRM nightmare for big corps. One could stream items another person's system owns for "free" without dealing with companies.

_heimdall|1 year ago

I agree the name isn't the best, but serverless as a hosting model has used the same definition for many years now.

They aren't your servers and the server processes running your code are only active temporarily, usually with auto-scaling features.

crzylune|1 year ago

Serverless doesn’t mean no-server. It means someone else’s server. Their system. Their rules. Their way or the highway. No thank you.

kasey_junk|1 year ago

How much do your data centers cost to build, roughly? How do you get global bandwidth without peering?

frereubu|1 year ago

I hate the term "serverless". It's a misnomer to the extent that it feels like it was designed to deliberately mislead. Even vague consultant-speak like "externally provisioned infrastructure" would feel more accurate.

fabian2k|1 year ago

Simple monoliths are much easier to reason about and debug. And the costs are much easier to estimate.

Serverless functions are quite interesting for certain use cases, but those are mostly additions to the main application. I'd hesitate to build a typical web application with mostly CRUD around serverless, it's just more complexity I don't need. But for handling jobs that are potentially resource intensive or that come in bursts something like Lambda would be a good fit.

qaq|1 year ago

How about cost at scale? Amazon itself shifted Prime Video from serverless to mostly containers and it resulted in huge savings.

dijit|1 year ago

From what I recall about that situation, they had a really stupid architecture that was using S3 as intermediate storage and processing video multiple times across multiple stages.

In fact, the solution still used serverless afaik: https://www.youtube.com/watch?v=BcMm0aaqnnI

(take that u/UltraSane! https://news.ycombinator.com/item?id=42506205)

It likely could have been solved by serverless too, by using local storage and having the pipeline condensed into a single action...

FD: I'm not a fan of serverless for production anything.

sitkack|1 year ago

Without a link and a breakdown, that makes no sense. We switched from blue to square and saw a honeysuckle savings.

Amazon runs both and serverless is a billing model. Many serverless runtimes consume containers.

Serverless, like microservices, is a design philosophy.

cesarb|1 year ago

> The median product engineer should reason about applications as composites of high-level, functional Lego blocks where technical low-level details are invisible.

We don't make buildings from Lego blocks. We do use modular components on buildings (ceramic bricks, steel beams, etc), but they are cemented or soldered together into a monolithic whole.

In my opinion, "serverless" (which, as others have noted, is a horrible misnomer, since the server still exists; truly "serverless" software would run code exclusively on the client, like desktop software of old) suffers from the same issue as "remote procedure call"-style distributed software from back when that was the fashion: introducing the network in place of a simple synchronous in-process call also introduces several extra failure modes.

callamdelaney|1 year ago

It's not appropriate for high-compute or long-running workloads, e.g. video transcoding. It's more expensive, and potentially higher latency.

I worked for a company once whose entire product was built on hundreds of lambdas, it was a nightmare.

darthvervet2|1 year ago

Because tools like Lambda are expensive.

Because it locks us into a cloud provider.

Because the architectures tend toward function explosion. Think CommonFunctions.java, but all the calls are on the network. What could have been 2 containers and RabbitMQ has become 50 lambdas and 51 SQS topics.

Because distributed observability is hard.

Because the ESB people became serverless function people and brought their craziness with them. I'm busy cleaning up what should be a fairly simple application, but instead it has 300 lambdas.

All that said, serverless managed services like databases are useful.

Saris|1 year ago

For what I need it sounds overly complex and expensive, when a $5/mo VPS works just fine.

Any time AWS is mentioned I know it's going to be some huge expensive setup.

zelon88|1 year ago

"Look at how streamlined our organization is. We have no infrastructure to manage!" -IT Director with 206 different contract renewal dates.

moi2388|1 year ago

You pay per compute, and thus you have unpredictable costs. People and businesses don’t like unpredictable things, we tend to avoid it.

locustmostest|1 year ago

We're all-in on serverless / cloud-native for our platform (document management); it works really well for our model, as we deploy into the customer's AWS account.

The initial development learning curve was higher, but the end result is a system that runs with high reliability in customer clouds that doesn't require customers (or us) to manage servers. There are also benefits for data sovereignty and compliance from running in the customer's cloud account.

But another upside to serverless is the flexibility we've found when orchestrating the components. Deploying certain modules in specific configurations has been more manageable for us with this serverless / cloud-native architecture vs. past projects with EC2s and other servers.

The only downside that we see is possible vendor lock-in, but having worked across the major cloud providers, I don't think it's an impossible task to eventually offer Azure and GCP versions of our platform.

anthonyskipper|1 year ago

I built a serverless startup (GalaticFog) about 8 years ago and had to shut it down; the market never developed. There were some obvious lessons learned.

First, most companies thought they needed to do containers before serverless, and frankly it took them a while to get good at that.

Second, the programming model was crap. It's really hard to debug across a bunch of function calls that are completely separate apps. It's just a lot of work, and it made you want to go monolith and containers.

Third, the spin-up time was a deal killer: most people would not let that go, and wanted something always running so there was no latency. Sure, workloads exist that do not require that, but they are niche, and serverless stayed niche.

wink|1 year ago

There's a huge grey area between "I want the response time of a warmed-up lambda" and "I don't have enough hits for it to actually stay warm". Pair that with certain language "runtimes" like the JVM, and there you have it.

Vt71fcAqt7|1 year ago

>Something I’m still having trouble believing is that complex workflows are going to move to e.g. AWS Lambda rather than stateless containers orchestrated by e.g. Amazon EKS. I think 0-1 it makes sense, but operating/scaling efficiently seems hard. […]

This isn't really saying anything about serverless, though. The issue here is not with serverless but that Lambda wants you to break up your server into multiple smaller functions. Google Cloud Run[0] lets you simply upload a Dockerfile, and it will run it for you and deal with scaling (including scaling to zero).

[0] https://cloud.google.com/run
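
The container-first contract is small enough to sketch (a toy, assuming only the documented Cloud Run convention of injecting a PORT environment variable): any ordinary HTTP server works unchanged, with no provider-specific event shapes.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # An ordinary HTTP handler; nothing vendor-specific here.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass

def make_server() -> HTTPServer:
    # The platform injects PORT; locally it just defaults to 8080.
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("", port), Handler)

# To run: make_server().serve_forever()
```

The same process runs identically on a laptop, in a local Docker container, or on the managed platform, which is exactly the portability argument being made.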

jvanderbot|1 year ago

We've looked at these tradeoffs over and over at places I work.

There's always part of the stack (at least on the kinds of problems I work on) that is CPU intense. That part makes sense to have elastic scaling.

But there's also a ton of the stack that is stateful and / or requires long buildup to start from scratch. That part will always be a server. It's just much easier.

For my own projects, I prefer Lambda. It comes with zero sysadmin, costs zero to start and maintain, and can scale to infinity more easily than a backend server. It's not without costs, but most of the backend frameworks I use (FastAPI, Axum) can work in Lambda or on a traditional server equally easily, so it is a two-way door.

synthc|1 year ago

'Serverless' has its uses, but not for everything:

- Serverless can get very expensive
- DevEx is less than stellar; you can't run a debugger
- Vendor lock-in
- You might be forced to update when they stop supporting older runtime versions

helle253|1 year ago

I understand the appeal of serverless, especially for small stuff (we have a few serverless projects at work, and I've built some hobby projects using Lambda), but ime DevEx is such a dealbreaker. Testing changes or debugging an issue? forget about it.

Without tooling to run a serverless service locally, this is always going to be a sticking point. This is fine for hobby projects where you can push to prod in order to test (which is what I've ended up doing) but if you want stronger safeguards, it's a real problem.

demarq|1 year ago

Before we all go into why lambda doesn’t work, remember that companies are happily handing many many millions of dollars to AWS each year, and will continue to do so for some time.

deivid|1 year ago

IMO, the dev workflow is significantly worse, integration testing is harder and I don't see the value on "scale to zero", when the alternative is a $5/mo VPS.

franktankbank|1 year ago

Agree with you, but you're paying for the potential of a sudden burst in traffic, planned or unplanned. Are you going to maintain 5000 servers when you may only use them for an intense period of a few hours on a single day of the month? That's the canonical serverless pitch. I'd hate to develop a new pipeline using serverless as my dev environment.

fifticon|1 year ago

The optimistic tone at the start of the article might just be a hallucinatory strawman setup. But, as a probably-old dog, I fail to see the allure of these technologies.

When I read the copy trying to peddle them, it sounds to me quite like someone saying "Heey.. PSST! Wanna borrow $5000 in cash? I can give it to you right now! Don't worry about 'interest rates', we'll get back to that LATER".

When I build stuff out of 'serverless', I find it rather difficult to figure out what my operation costs are going to be; I usually learn later through perusing the monthly bills.

I think the main two things I have appreciated(?), is

(1) that I can publish/update functions on the cloud in 1-5 seconds, whereas the older web services I also use often take 30-120 SECONDS (not minutes, sorry) to 'flip around' for each publish.

(2) that I can publish/deploy relatively small units of code with 'functions'. But again, that is not quite accurate. It's more like 'I need to include less boilerplate' with some code to deploy it, because to do anything relevant I more or less need to publish the same amount of domain/business-logic code as I used to with the older technologies.

Apart from that, I mostly see downsides:

- my 'function/serverless' code becomes very tied to the vendor
- testing a local dev setup is either impossible or convoluted, so I usually end up doing my dev work directly against cloud instances

I'm probably just old dog, but I much prefer a dev environment that allows me to work on my own laptop, even if the TCP/IP cable is yanked.

Oh yeah, and spit on you too, YAML :-) They found a curse to match the abomination of "coding in xml languages" of 20 years ago..

ebiester|1 year ago

They're useful in a small set of behaviors. If you have a particular job that is run infrequently but is burstable, it doesn't make sense to have a server hanging around for just that purpose.

My current employer standardized on serverless and for many things it works well enough, but from my standpoint it's just more expensive.

pjmlp|1 year ago

> Microservices made a canonical example of how easy it is to miscalibrate that bet. Since the trend started ~15y ago,....

What started was the rebranding from distributed systems.

We have had Sun RPC (The network is the computer, a slogan now owned by Cloudflare), DCE, CORBA, DCOM, RMI, Jini, .NET Remoting, SOAP, XML-RPC, JSON-RPC,....

Client-Server, N-Tier Architecture, SOA, WebServices,...

Apparently the new trend is Microservices-based, API-first, Cloud-native, and Headless with SaaS products, aka MACH.

9rx|1 year ago

> What started was the rebranding from distributed systems.

Not really. Microservices normally refers to humans and how they work together, or, perhaps, don't work together. Microservices is the same service model found in the macro economy but applied to the micro economy of a single business, which was a novel idea at least to the general public, hence the name.

Due to Conway's law, the product ends up being a distributed system more often than not, but that is only a side effect. Theoretically you could have microservices without distributed systems, and we do see some instances of services found in the macro economy that are not offered as distributed computing products, not to mention that services even predate the network. But distributed is definitely the way most things are going.

gchamonlive|1 year ago

Shameless plug.

I work for a community project that is building a decentralized orchestration mechanism intended, among other things, to democratise access to serverless open compute while also being cloudless.

Take a look at the project at https://nunet.io to know more about it!

deweller|1 year ago

For anyone here struggling with AWS Amplify or AWS CDK - I recently discovered https://sst.dev/ for serverless deployment.

It doesn't solve all problems (it isn't a CRUD framework), but it does make the developer experience much better compared to Amplify.

bdcravens|1 year ago

Lack of vendor-agnostic solutions, and ridiculous amounts of configuration. It explodes complexity.

SavageBeast|1 year ago

I always found the whole thing odd, personally. The Venn diagram of people who both need to run a service in the cloud AND cannot manage an EC2 instance is a seemingly small set of people. I never saw the advantage to it, and it's got plenty of drawbacks.

rasengan|1 year ago

I think it’s all about cost analysis. That said, there are definitely some services that are worth outsourcing, like smtp, until you get to a certain size.

Separately, when you factor in data privacy, your decision making tree will certainly change quickly.

jamesponddotco|1 year ago

Why would I care about serverless? I love managing and working with bare metal servers.

tomrod|1 year ago

Why should I be serverless? I like servers. I like containers. I like options.

hawski|1 year ago

Isn't a shared host with PHP serverless for all intents and purposes?

bearjaws|1 year ago

Because most companies have incompetent ops and leadership that cargo-cult themselves into more tech debt.

umitkaanusta|1 year ago

to me it seems much more intuitive to think in terms of actual servers. lambda seems like chicken nuggets but i wanna eat -say- a decent rotisserie not nuggets.

devmor|1 year ago

It’s expensive. Even considering dev and ops hours.

bananapub|1 year ago

it's annoying and expensive

fghorow|1 year ago

It's too bloody expensive. QED.

mrayycombi|1 year ago

"A death star of death stars".

JohnClark1337|1 year ago

Because some of us need to have the servers that the "serverless" people use.