Launch HN: Seed (YC W21) – A Fully-Managed CI/CD Pipeline for Serverless
We've built a service that makes it easy to manage a CI/CD pipeline for serverless apps on AWS. There are no build scripts and our custom deployment infrastructure can speed up your deployments almost 100x by incrementally deploying your services and Lambda functions.
For some background, serverless is an execution model where you send a cloud provider (AWS in this case) a piece of code (on AWS, this is called a Lambda function). The cloud provider is responsible for executing it and scaling it to meet traffic demands. And you are billed for the exact number of milliseconds of execution.
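To make that concrete, here is a minimal sketch of what such a piece of code looks like (using Python; the event shape is a simplified assumption, not a specific AWS contract):

```python
# A minimal Lambda-style handler: the cloud provider invokes this
# function with the request event and a runtime context object,
# runs it, scales it, and bills per millisecond of execution.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

You never manage the server it runs on; you only write and upload the function.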
Back in 2016 we were really excited to discover serverless and the idea that you could just focus on your code. So we wrote a guide to show people how to build full-stack serverless applications — https://serverless-stack.com. But once we started using serverless internally, we started hitting all the operational issues that come with it.
Serverless Framework apps are typically made up of multiple services (20-40), where each service might have 10-20 Lambda functions. To deploy a service, you need to package each Lambda function (generate a zip of the source). This can take 3-5 mins. So the entire app might take over 45 mins to deploy!
To fix this, people write scripts to deploy services concurrently. But some might need to be deployed after others, or in a specific order. And if a large number of services are deployed concurrently, you tend to run into rate-limit errors (at least in the AWS case)—meaning your scripts need to handle retries. Your services might also be deployed to multiple environments in different AWS accounts, or regions. It gets complicated! Managing a CI/CD pipeline for these apps can be difficult, and the build scripts can get large and hard to maintain.
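The kind of script people end up writing can be sketched like this: deploy in dependency-ordered phases, run each phase concurrently, and retry on rate-limit errors with backoff. All names here (`RateLimitError`, `deploy_with_retry`, the phase lists) are illustrative, not part of any real tooling:

```python
import time
from concurrent.futures import ThreadPoolExecutor

class RateLimitError(Exception):
    """Stand-in for a provider rate-limit (throttling) error."""

def deploy_with_retry(deploy_fn, service, retries=3, base_delay=0.1):
    """Retry a single service deploy on rate-limit errors, with backoff."""
    for attempt in range(retries):
        try:
            return deploy_fn(service)
        except RateLimitError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

def deploy_all(deploy_fn, phases, max_workers=5):
    """Deploy phase by phase; services within a phase deploy concurrently."""
    results = []
    for phase in phases:
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            results.extend(pool.map(
                lambda svc: deploy_with_retry(deploy_fn, svc), phase))
    return results
```

For example, `deploy_all(deploy, [["database"], ["api", "web"]])` deploys the database service first, then the api and web services concurrently. Even this toy version shows how quickly ordering, concurrency, and retries pile up in a hand-rolled script.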
We spoke to folks in the community who were using serverless in production and found that this was a common issue, so we decided to fix it. We've built a fully-managed CI/CD pipeline specifically for Serverless Framework and CDK apps on AWS. We support deploying to multiple environments and regions, using the most common git workflows. There's no need for a build script. You connect your git repo, point to the services, add your environments, and specify the order in which you want your services deployed. And Seed does the rest. It'll deploy all your services concurrently and reliably (handling any retries). It'll also reliably remove the services when a branch is removed or a PR is closed.
Recently we launched incremental deploys, which can really speed up deployments. We do this by checking which services have been updated, and which of the Lambda functions in those services need to be deployed. We internally store the checksums for the Lambda function packages and run these checks concurrently. We then deploy only the Lambda functions that have been updated. We've also optimized the way the dependencies (node_modules) in your apps are cached and installed. We download and restore them asynchronously, so they don't block the build steps.
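The core of the incremental-deploy idea can be sketched in a few lines: checksum each function's package, compare against the checksums stored from the last deploy, and only redeploy what changed. This is an illustrative sketch, not Seed's actual implementation; the function and parameter names are made up:

```python
import hashlib

def package_checksum(files):
    """Checksum a function package from its source files (name -> bytes)."""
    digest = hashlib.sha256()
    for name in sorted(files):          # sort so order never affects the hash
        digest.update(name.encode())
        digest.update(files[name])
    return digest.hexdigest()

def functions_to_deploy(previous, current):
    """Return the functions whose package checksum changed since last deploy."""
    return [fn for fn, files in current.items()
            if previous.get(fn) != package_checksum(files)]
```

If only one of 500 functions changed, only that one is repackaged and deployed, which is where most of the speedup comes from.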
Since our launch in 2017, hundreds of teams have come to rely on Seed every day to deploy their serverless apps. Our pricing plans are based on the number of build minutes you use, and we do not charge extra for concurrent builds. We also have a great free tier — https://seed.run/pricing
Thank you for reading about us. We would love to hear what you think and how we can improve Seed, or serverless in general!
[+] [-] f6v|5 years ago|reply
PS YC is still bullish on selling shovels I see.
[+] [-] jayair|5 years ago|reply
Now we've got a ton of companies that just use Lambda. So you can imagine a team of 50 developers, working on 40 or so separate services, with 500 or so Lambda functions. It can be hard to manage the tooling for all of this internally.
[+] [-] davmar|5 years ago|reply
These guys have done an outstanding job, definitely take a look. It's an indispensable tool.
[+] [-] jayair|5 years ago|reply
[+] [-] gazzini|5 years ago|reply
I’m surprised to hear how many separate lambda functions each service in your example had. I understand the need to deploy each service independently... but to have 10+ deployments within each service seems crazy to me. Is there a reason each service needs so many lambdas (vs deploying the service code as a single lambda function with different branches)?
Fwiw, I found it possible to get quite far with a single monolithic lambda function that defined multiple “routes” within it, similar to how an Express server would define routes & middleware.
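The "monolithic" pattern described above can be sketched as a single handler that dispatches to routes internally, Express-style. This is a hypothetical sketch in Python (the route table and decorator are made-up names, and the event shape is the simplified API Gateway style):

```python
# One Lambda, many routes: a single handler dispatches internally,
# much like an Express app defines routes and middleware.
ROUTES = {}

def route(method, path):
    """Decorator that registers a function under (method, path)."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/notes")
def list_notes(event):
    return {"statusCode": 200, "body": "[]"}

@route("POST", "/notes")
def create_note(event):
    return {"statusCode": 201, "body": event.get("body", "")}

def handler(event, context):
    fn = ROUTES.get((event["httpMethod"], event["path"]))
    if fn is None:
        return {"statusCode": 404, "body": "Not found"}
    return fn(event)
```

One function means one deployment, at the cost of one shared package (and, as noted below, one shared blast radius).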
Anyways, thanks for writing that PDF, and good luck with Seed!
[+] [-] erikerikson|5 years ago|reply
Obviously this can expand the blast radius of any vulnerability and tends to encourage coarser-grained privilege grants.
[+] [-] jayair|5 years ago|reply
On the Lambdas-per-service front, the Express server inside a Lambda function does work. But a lot of our customers (and Seed itself) have APIs that need lower response times. And individually packaging them using Webpack or esbuild ends up being the best way to do that. So you'll split each endpoint into its own Lambda.
I just think the build systems shouldn't limit the architectural choices.
[+] [-] anfrank|5 years ago|reply
[+] [-] whalesalad|5 years ago|reply
I’m trying to think of how a service would help me here. However I do think this is a frontier-space where there is a lot of room for improvement. Looks polished though, I’ll take it for a spin on a hobby project soon.
[+] [-] jayair|5 years ago|reply
Looking forward to hearing your feedback when you give it a try! I should've clarified in the post, we support all the runtimes, not just Node.
[+] [-] jack_riminton|5 years ago|reply
Of course 'it depends', but roughly speaking?
[+] [-] jayair|5 years ago|reply
But here are the caveats: if your usage patterns are 24/7 and very predictable, you can design your infrastructure to be cheaper than Lambda.
However, for most other cases, including us at Seed (we use serverless extensively), it's so much cheaper that we wouldn't do it any other way.
If you have a hobby project, it'll be in the free tier.
Some more details here — https://serverless-stack.com/chapters/why-create-serverless-...
[+] [-] jayair|5 years ago|reply
[+] [-] zackmorris|5 years ago|reply
I'm new to all of it, but the security groups, route tables, internet gateways and other implementation details of AWS left me feeling overwhelmed and insecure (literally, because roles and permissions are nearly impossible for humans to reason about). AWS also suffers from the syndrome of: if you want to use some of it, you have to learn all of it.
Basically what I need is a sandbox for running Docker containers with any reasonable scale (under 100? what's big these days?). Then I just want to be able to expose incoming port 443 and one or two others for a WebSocket or an SSL port so admins can get to the database and filesystem (maybe). Why is something so conceptually trivial not offered by more hosting providers?
I researched Heroku a bit but am not really sure what I'm looking at without actually doing the steps. I'm also not entirely certain why CI/CD has been made so complicated. I mean conceptually it's:
1) Run a web hook to watch for changes at GitHub and elsewhere
2) Optionally run a bunch of unit tests and if they pass, go to step 3
3) Run a command like "docker-compose --some-option-to-make-this-happen-remotely up"
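The three steps above can be sketched in a few lines. This is a toy sketch under the commenter's own framing (the function name, branch filter, and default commands are illustrative assumptions, not any real CI product's API):

```python
import subprocess

def on_push(event, test_cmd=("pytest",), deploy_cmd=("docker-compose", "up", "-d")):
    """Minimal pipeline: receive a push-webhook payload, test, then deploy."""
    if event.get("ref") != "refs/heads/main":
        return "skipped"                      # step 1: only react to main
    if subprocess.call(test_cmd) != 0:        # step 2: run the test suite
        return "tests failed"
    if subprocess.call(deploy_cmd) != 0:      # step 3: deploy
        return "deploy failed"
    return "deployed"
```

Real pipelines grow from here because of everything around these steps: isolated build environments, secrets, caching, retries, rollbacks, and notifications.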
So why is a 3 step thing a 3000 step thing? Full disclosure, I did the 3000 steps with Terraform and while I learned a lot from the experience, I can't say that I see the point of most of it. I would not recommend the bare-hands way on any cloud provider to anyone, ever (unless they're a big company or something).
I guess what I'm asking is, could you adapt what you've done here to work with other AWS services like ECS? It's all of the same configuration and monitoring stuff. I've already hit several bugs in ECS where you have to manually run docker prune and other commands in the EC2 instance because the lifetimes are in hours and they haven't finished the rough edges around their cleanup commands. So I've hit problems where even though I've spun down the cluster, the new one won't spin up because it says the Nginx container is still using the port. I can't tell you how infuriating it is to have to work around issues like that which ECS was supposed to handle in the first place. And I've hit similar gotchas on the other AWS services too, to the point where I'm having trouble seeing the value in what they're offering, or even understanding why a service exists in the first place, when I might have done it a different way if I was designing it.
TL;DR: if you could make deploying Docker as "easy" as Lambda, you'd quickly run out of places to store the money.
[+] [-] jayair|5 years ago|reply
We run some ECS clusters internally and have run into some of the issues you mentioned. We use Seed to deploy them but the speed and reliability bit that I talked about in the post mainly applies to Lambda. So Seed can do the CI/CD part but it can't really help with the issues you mentioned.
Btw, have you tried Fargate?
[+] [-] colinchartier|5 years ago|reply
TL;DR:
1. Install on GitHub https://github.com/apps/layerci/installations/new
2. Create files called 'Layerfile' to configure the pipeline
Docker Compose example for step 3: https://layerci.com/docs/examples/docker-compose
Then just point it at a docker swarm cluster or run the standard docker/ecs integration: https://docs.docker.com/cloud/ecs-integration/
[+] [-] pongogogo|5 years ago|reply
[+] [-] nijave|5 years ago|reply
Even at 100 containers you're probably going to want health checks (some load balancer integration), rolling deploys, metrics, and aggregated logging.
Amazon also added support for Docker container images to Lambda. You need to make sure your container implements the correct runtime interface so Lambda can start it, which is described in their docs.
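In practice, if you build from one of the AWS-provided base images, that runtime interface is already wired up and you only supply a handler. A sketch (the Dockerfile lines in the comment assume the public `public.ecr.aws/lambda/python` base image; check the AWS docs for the current tags):

```python
# app.py — with an AWS-provided Lambda base image, the runtime
# interface client is included; your code is just an ordinary handler.
#
# Dockerfile (sketch):
#   FROM public.ecr.aws/lambda/python:3.12
#   COPY app.py ${LAMBDA_TASK_ROOT}
#   CMD ["app.handler"]
def handler(event, context):
    return {"statusCode": 200, "body": "hello from a container image"}
```

Custom (non-AWS) base images are where you have to add the runtime interface client yourself.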
[+] [-] greyjumper|5 years ago|reply
You can also integrate it with GitHub Actions.
[+] [-] up6w6|5 years ago|reply
[+] [-] PaywallBuster|5 years ago|reply
Half of my project is being developed as serverless microservices that complement the big monolith application.
I've basically implemented a "monorepo CI/CD" which mostly works fine for our needs. (With some limitations/bugs in Gitlab CI due to the monorepo design)
For the most part we probably don't get so many functions bundled together, thus avoiding the deployment limitations referred to above.
Only one serverless app is reaching any kind of limits (200 resources per CloudFormation template, if I remember correctly).
https://pedrogomes.medium.com/gitlab-ci-cd-serverless-monore...
[+] [-] jayair|5 years ago|reply
What we started noticing with the teams we were talking to (and from our own experience) was that the build process started limiting our architecture choices. For example, we want functions packaged individually because it reduces cold starts. But because builds took so long, we had to make a trade-off. And that didn't make sense to us.
[+] [-] mavbo|5 years ago|reply
[+] [-] jayair|5 years ago|reply
It doesn't connect to your Serverless Pro (their SaaS offering) account. Serverless Pro offers some similar features to Seed but most of our users just use Seed.
If you want to deploy using Seed, while viewing logs or metrics on Serverless Pro, you'll need to follow those docs you mentioned to create an access key (https://seed.run/docs/integrating-with-serverless-pro). We should clarify the integration in our docs to make it less confusing.
I hope that makes sense!
[+] [-] jayair|5 years ago|reply
https://github.com/seed-run/homepage/commit/e5fdd3fb41fedb2b...
[+] [-] garethmcc|5 years ago|reply
[+] [-] abd12|5 years ago|reply
[+] [-] jayair|5 years ago|reply
[+] [-] mikesabbagh|5 years ago|reply
[+] [-] jayair|5 years ago|reply
But we've got a CLI in the works, and that should let you control when you want to trigger a deploy.
[+] [-] ShiftEnter|5 years ago|reply
[+] [-] jayair|5 years ago|reply
https://github.com/seed-run/homepage/edit/master/_docs/addin...
[+] [-] jayair|5 years ago|reply
[+] [-] _0o6v|5 years ago|reply
I completed it and it was excellent, and a lot of fun.
The only thing I would say is that a section on public user uploads would be amazing (e.g. avatars) as the perms and CDK stuff is a bit knotty for that (I eventually figured it out but it took a bit of trial and error).
[+] [-] jayair|5 years ago|reply
That's a good point on the avatars idea. We'll need to create a version of the notes app that has a public aspect to it, maybe the ability to publish a note publicly.
[+] [-] davecap1|5 years ago|reply
[+] [-] jayair|5 years ago|reply
A couple of things that we do for CDK that are different from CodePipeline:
- Setting up environments is really easy, we support PR and branch based workflows out of the box.
- We automatically cache dependencies to speed up builds.
- And we internally use Lambda to deploy CDK apps, which means it's basically free on Seed (https://seed.run/docs/adding-a-cdk-app#pricing-limits)!
[+] [-] AlphaWeaver|5 years ago|reply
[+] [-] jayair|5 years ago|reply
[+] [-] jayair|5 years ago|reply
[+] [-] sicromoft|5 years ago|reply
[+] [-] jayair|5 years ago|reply
Under this scenario, the static site is hosted on the user's AWS account. Is that what you mean when you are thinking about an alternative?
We've talked about this internally, so I'm curious to hear about your use case.
[+] [-] smg|5 years ago|reply
https://aws.amazon.com/blogs/developer/cdk-pipelines-continu...
I know if I go the route of cdk pipelines I will need to implement my CI/CD pipeline on my own using cdk. I want to know what are the other advantages of seed.
[+] [-] jayair|5 years ago|reply
But the big one for CDK is that it's faster and basically free on Seed.
Feel free to get in touch if you want more details! [email protected]
[+] [-] seanemmer|5 years ago|reply
[+] [-] jayair|5 years ago|reply
But I'd love to connect and learn more about the specifics of Google Cloud.
[email protected]
[+] [-] freeqaz|5 years ago|reply
I'm thinking about lock-in -- what if you suddenly deprecated the product? Will my deploys suddenly break?
Are you planning to maintain 1:1 feature parity with Serverless/CDK long-term? Could I fall back to those deployment tools, albeit slower, worst case?
Either way, this is awesome and congrats on the launch!
[+] [-] jayair|5 years ago|reply
> Could I fall back to those deployment tools, albeit slower, worst case?
Yup, that's how we've designed Seed. We deploy it on your behalf. So if we were to go down, you could still deploy your app just as before.
[+] [-] simoncrypta|5 years ago|reply
[+] [-] jayair|5 years ago|reply