
Launch HN: Seed (YC W21) – A Fully-Managed CI/CD Pipeline for Serverless

178 points | jayair | 5 years ago

Hi HN, we are Jay and Frank from Seed (https://seed.run).

We've built a service that makes it easy to manage a CI/CD pipeline for serverless apps on AWS. There are no build scripts and our custom deployment infrastructure can speed up your deployments almost 100x by incrementally deploying your services and Lambda functions.

For some background, Serverless is an execution model where you send a cloud provider (AWS in this case) a piece of code (called an AWS Lambda function). The cloud provider is responsible for executing it and scaling it to meet traffic demands. And you are billed for the exact number of milliseconds of execution.

Back in 2016 we were really excited to discover serverless and the idea that you could just focus on your code. So we wrote a guide to show people how to build full-stack serverless applications — https://serverless-stack.com. But once we started using serverless internally, we started hitting all the operational issues that come with it.

Serverless Framework apps are typically made up of multiple services (20-40), where each service might have 10-20 Lambda functions. To deploy a service, you need to package each Lambda function (generate a zip of the source). This can take 3-5 mins. So the entire app might take over 45 mins to deploy!

To fix this, people write scripts to deploy services concurrently. But some might need to be deployed after others, or in a specific order. And if a large number of services are deployed concurrently, you tend to run into rate-limit errors (at least in the AWS case)—meaning your scripts need to handle retries. Your services might also be deployed to multiple environments in different AWS accounts, or regions. It gets complicated! Managing a CI/CD pipeline for these apps can be difficult, and the build scripts can get large and hard to maintain.
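To make that concrete, here's a rough sketch of the orchestration such a deploy script ends up needing: services grouped into ordered "waves", concurrency within a wave, and retries with backoff for rate-limit errors. The `deployService` callback is a hypothetical stand-in for whatever actually runs `serverless deploy` for one service; this is not Seed's implementation.

```javascript
// Deploy services wave by wave: waves run in order, services
// within a wave run concurrently, and retryable (rate-limit)
// failures are retried with a linear backoff.
const deployInWaves = async (waves, deployService, maxRetries = 3) => {
  for (const wave of waves) {
    await Promise.all(
      wave.map(async (service) => {
        for (let attempt = 1; ; attempt++) {
          try {
            return await deployService(service);
          } catch (err) {
            // Give up on non-retryable errors or after maxRetries.
            if (!err.retryable || attempt >= maxRetries) throw err;
            await new Promise((r) => setTimeout(r, 100 * attempt));
          }
        }
      })
    );
  }
};
```

Even this toy version already has to care about ordering, concurrency, and retries; real scripts also accumulate environment, region, and account handling on top.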

We spoke to folks in the community who were using serverless in production and found that this was a common issue, so we decided to fix it. We've built a fully-managed CI/CD pipeline specifically for Serverless Framework and CDK apps on AWS. We support deploying to multiple environments and regions, using the most common git workflows. There's no need for a build script: you connect your git repo, point to the services, add your environments, and specify the order in which you want your services deployed. And Seed does the rest. It'll deploy all your services concurrently and reliably (handling any retries). It'll also reliably remove the services when a branch is removed or a PR is closed.

Recently we launched incremental deploys, which can really speed up deployments. We do this by checking which services have been updated, and which of the Lambda functions in those services need to be deployed. We internally store the checksums for the Lambda function packages and run these checks concurrently. We then deploy only those Lambda functions that have been updated. We've also optimized the way the dependencies (node_modules) in your apps are cached and installed: we download and restore them asynchronously, so they don't block the build steps.

Since our launch in 2017, hundreds of teams have come to rely on Seed every day to deploy their serverless apps. Our pricing plans are based on the number of build minutes you use, and we do not charge extra for concurrent builds. We also have a great free tier — https://seed.run/pricing

Thank you for reading about us. We would love to hear what you think and how we can improve Seed, or serverless in general!

102 comments

[+] f6v|5 years ago|reply
I built my first Lambda 4 years ago and it was great: no servers, no complicated tools. Just one function that I uploaded, and it worked. The amount of tooling that exists now is just daunting. At this point, is it still worth it if the technology is so complex that people are building entire SaaS products to manage it?

PS YC is still bullish on selling shovels I see.

[+] jayair|5 years ago|reply
I think that's fair. When we started back in 2016 with Lambda, it was similar to how you describe it.

Now we've got a ton of companies that just use Lambda. So you can imagine a team of 50 developers, working on 40 or so separate services, with 500 or so Lambda functions. It can be hard to manage the tooling for all of this internally.

[+] davmar|5 years ago|reply
I use seed.run and it is absolutely outstanding. The UI is incredibly easy to use and I have so much more confidence in my deployments.

These guys have done an outstanding job, definitely take a look. It's an indispensable tool.

[+] jayair|5 years ago|reply
Wow thank you! Really appreciate your support!
[+] gazzini|5 years ago|reply
I loved reading serverless-stack a couple of years ago; it was really helpful & convinced me to use serverless for a side-project that’s still going (with almost no expenses!).

I’m surprised to hear how many separate Lambda functions each service in your example had. I understand the need to deploy each service independently... but to have 10+ deployments within each service seems crazy to me. Is there a reason each service needs so many Lambdas (vs deploying the service code as a single Lambda function with different branches)?

Fwiw, I found it possible to get quite far with a single monolithic lambda function that defined multiple “routes” within it, similar to how an Express server would define routes & middleware.

Anyways, thanks for writing that PDF, and good luck with Seed!

[+] erikerikson|5 years ago|reply
One problem with monolithic functions is that you must grant them a union of all the rights required by every code branch in the monolith.

Obviously this can expand the blast radius of any vulnerability, and it tends to encourage coarser-grained privilege grants.

[+] jayair|5 years ago|reply
Thank you for the kind words about Serverless Stack. Frank and I poured ourselves into creating it. So it makes me really happy when I hear that it ended up being helpful.

On the Lambdas-per-service front, the Express server inside a Lambda function does work. But a lot of our customers (and Seed itself) have APIs that need lower response times, and individually packaging the functions using Webpack or esbuild ends up being the best way to get there. So you'll split each endpoint into its own Lambda.

I just think the build systems shouldn't limit the architectural choices.

[+] anfrank|5 years ago|reply
Frank here from Seed. Just wanted to add that when you have a monolithic Lambda, multiple routes share a single CloudWatch log group and metrics, and show up as one node in X-Ray. On the flip side, having a separate Lambda function handle each route lets you leverage other AWS services better.
[+] whalesalad|5 years ago|reply
I have achieved this with AWS CloudFormation/SAM, a template.yml, and a makefile. Polyglot too: a mix of Python and JS backends across multiple functions.

I’m trying to think of how a service would help me here. However I do think this is a frontier-space where there is a lot of room for improvement. Looks polished though, I’ll take it for a spin on a hobby project soon.

[+] jayair|5 years ago|reply
Yeah makes sense. Adding SAM support is on our roadmap.

Looking forward to hearing your feedback when you give it a try! I should've clarified in the post, we support all the runtimes, not just Node.

[+] jack_riminton|5 years ago|reply
Looks great. For someone who's not taken the plunge into Serverless yet, how would the costs compare to the more traditional options of hosting an app? i.e. a Rails/React app on Heroku

Of course 'it depends', but roughly speaking?

[+] jayair|5 years ago|reply
Yeah, it does depend. But the savings that get touted are around 70-80%.

But here are the caveats: if your usage patterns are 24/7 and very predictable, you can design your infrastructure to be cheaper than Lambda.

However, for most other cases, including us at Seed (we use serverless extensively), it's so much cheaper that we wouldn't do it any other way.

If you have a hobby project, it'll be in the free tier.

Some more details here — https://serverless-stack.com/chapters/why-create-serverless-...

[+] jayair|5 years ago|reply
Oh I'll add, Seed is heavily influenced by Heroku. It's a little like Heroku but for Serverless.
[+] zackmorris|5 years ago|reply
I wish there was something like this for Docker rather than Lambda functions.

I'm new to all of it, but the security groups, route tables, internet gateways and other implementation details of AWS left me feeling overwhelmed and insecure (literally, because roles and permissions are nearly impossible for humans to reason about). AWS also suffers from the syndrome of: if you want to use some of it, you have to learn all of it.

Basically what I need is a sandbox for running Docker containers with any reasonable scale (under 100? what's big these days?). Then I just want to be able to expose incoming port 443 and one or two others for a WebSocket or an SSL port so admins can get to the database and filesystem (maybe). Why is something so conceptually trivial not offered by more hosting providers?

I researched Heroku a bit but am not really sure what I'm looking at without actually doing the steps. I'm also not entirely certain why CI/CD has been made so complicated. I mean conceptually it's:

1) Run a web hook to watch for changes at GitHub and elsewhere

2) Optionally run a bunch of unit tests and if they pass, go to step 3

3) Run a command like "docker-compose --some-option-to-make-this-happen-remotely up"

So why is a 3-step thing a 3000-step thing? Full disclosure: I did the 3000 steps with Terraform, and while I learned a lot from the experience, I can't say that I see the point of most of it. I would not recommend the bare-hands way on any cloud provider to anyone, ever (unless they're a big company or something).

I guess what I'm asking is, could you adapt what you've done here to work with other AWS services like ECS? It's all of the same configuration and monitoring stuff. I've already hit several bugs in ECS where you have to manually run docker prune and other commands in the EC2 instance because the lifetimes are in hours and they haven't finished the rough edges around their cleanup commands. So I've hit problems where even though I've spun down the cluster, the new one won't spin up because it says the Nginx container is still using the port. I can't tell you how infuriating it is to have to work around issues like that which ECS was supposed to handle in the first place. And I've hit similar gotchas on the other AWS services too, to the point where I'm having trouble seeing the value in what they're offering, or even understanding why a service exists in the first place, when I might have done it a different way if I was designing it.

TL;DR: if you could make deploying Docker as "easy" as Lambda, you'd quickly run out of places to store the money.

[+] jayair|5 years ago|reply
Yeah I feel your pain in regards to AWS. It was a big reason why we wrote https://serverless-stack.com.

We run some ECS clusters internally and have run into some of the issues you mentioned. We use Seed to deploy them but the speed and reliability bit that I talked about in the post mainly applies to Lambda. So Seed can do the CI/CD part but it can't really help with the issues you mentioned.

Btw, have you tried Fargate?

[+] colinchartier|5 years ago|reply
We're building something like what you describe (YC S20) - https://layerci.com - it's similar to OP but meant for standard containers instead of serverless.

TL;DR:

1. Install on GitHub https://github.com/apps/layerci/installations/new

2. Create files called 'Layerfile' to configure the pipeline

Docker Compose example for step 3: https://layerci.com/docs/examples/docker-compose

Then just point it at a docker swarm cluster or run the standard docker/ecs integration: https://docs.docker.com/cloud/ecs-integration/

[+] pongogogo|5 years ago|reply
Have you tried cloud run on GCP? It sits in the niche you're describing between a serverless platform and some managed container orchestration platform like kubernetes (GKE or EKS).
[+] nijave|5 years ago|reply
K8s on DigitalOcean might be a solution. K8s can be pretty complex but for a single tenant/single app you can probably skip some of the complexity.

Even at 100 containers you're probably going to want health checks (some load balancer integration), rolling deploys, metrics, and aggregated logging.

Amazon also added support for Docker containers in Lambda. You need to make sure your container implements the correct runtime interface so Lambda can start it, which is covered in their docs.

[+] greyjumper|5 years ago|reply
I think you could check Moncc https://docs.moncc.io/ - you can wrap all of the above in a template (provisioning and orchestration) and run locally or on gcp/aws

you can also integrate it with github actions

[+] PaywallBuster|5 years ago|reply
tbh, didn't run into this problem yet.

Half of my project is being developed in serverless (the microservices) that add to the big monolith application.

I've basically implemented a "monorepo CI/CD" which mostly works fine for our needs. (With some limitations/bugs in Gitlab CI due to the monorepo design)

For the most part we probably don't get so many functions bundled together, thus avoiding the deployment limitations referred to above.

Only one serverless app is reaching any kind of limits (200 resources per CloudFormation template, if I remember correctly)

https://pedrogomes.medium.com/gitlab-ci-cd-serverless-monore...

[+] jayair|5 years ago|reply
Yeah that makes sense. That's basically how Seed started. Thanks for sharing.

What we started noticing with teams that we were talking to (and from our own experience) was that the build process started limiting our architecture choices. For example, we want functions packaged individually because it reduces cold starts. But because the builds take so long, we had to make a trade-off. And that didn't make sense to us.

[+] mavbo|5 years ago|reply
This looks great! I've been using Serverless Framework for a project and have not been too satisfied with the experience. Could you explain the integration with that framework a little more? I see the two options for services with Seed are the Serverless Framework or Serverless Stack (which I have no experience with, but looks like a compelling alternative). Is Seed just compatible with existing Serverless Framework yml configurations, or does it integrate with your Serverless Framework account somehow? I see you offer an integration with Serverless Pro, which confused me as this appeared (to me) to be a full replacement for Serverless Framework.
[+] jayair|5 years ago|reply
Yeah, so if you have a Serverless Framework (the open source project) app in a git repo, you can add it to Seed, and it'll deploy it for you to the environments you configure on Seed.

It doesn't connect to your Serverless Pro (their SaaS offering) account. Serverless Pro offers some features similar to Seed's, but most of our users just use Seed.

If you want to deploy using Seed, while viewing logs or metrics on Serverless Pro, you'll need to follow those docs you mentioned to create an access key (https://seed.run/docs/integrating-with-serverless-pro). We should clarify the integration in our docs to make it less confusing.

I hope that makes sense!

[+] garethmcc|5 years ago|reply
I am curious what made you unsatisfied. As a member of the Serverless team I'd love to hear the feedback so we can potentially improve the experience for you and others.
[+] abd12|5 years ago|reply
Wow! Congrats to you, Jay and Frank. I've been a fan of your work on both Seed.run & Serverless Stack for a while. Best of luck, and I'm excited to see Seed grow :)
[+] jayair|5 years ago|reply
Thank you! I really appreciate the support!
[+] mikesabbagh|5 years ago|reply
Thank you for your service. I just registered. I previously used the AWS CI/CD tools to do this; they integrated well for my simple use case. Can I trigger a deploy daily?
[+] jayair|5 years ago|reply
Currently, there isn't a way to do it directly on Seed. It can be triggered using a git push.

But we've got a CLI in the works, and that should let you control when you want to trigger a deploy.

[+] ShiftEnter|5 years ago|reply
Looks really neat! I will try it out for my next ~/tmp weekend project. Meanwhile, I noticed that the link to the C# project is broken on this page https://seed.run/docs/adding-dotnet-core-projects . I wanted to propose the change, but I couldn't find the repository on your GitHub.
[+] jayair|5 years ago|reply
For some reason that repo is internally set to private, I'll check and see why that is.
[+] _0o6v|5 years ago|reply
Well done, and thanks for Serverless Stack! Awesome tutorial!

I completed it and it was excellent, and a lot of fun.

The only thing I would say is that a section on public user uploads would be amazing (e.g. avatars) as the perms and CDK stuff is a bit knotty for that (I eventually figured it out but it took a bit of trial and error).

[+] jayair|5 years ago|reply
Thank you for the kind words!

That's a good point on the avatars idea. We'll need to create a version of the notes app where there's a public aspect to it. So maybe being able to publish notes.

[+] davecap1|5 years ago|reply
How does this compare to something like AWS CodePipeline with CDK (https://docs.aws.amazon.com/cdk/latest/guide/cdk_pipeline.ht...)?
[+] jayair|5 years ago|reply
Most of my post was about Serverless Framework but we support CDK as well (with SST https://github.com/serverless-stack/serverless-stack).

A couple of things that we do for CDK that are different from CodePipeline:

- Setting up environments is really easy, we support PR and branch based workflows out of the box.

- We automatically cache dependencies to speed up builds.

- And we internally use Lambda to deploy CDK apps, which means it's basically free on Seed (https://seed.run/docs/adding-a-cdk-app#pricing-limits)!

[+] AlphaWeaver|5 years ago|reply
It's been a while since I touched anything serverless, but it looks like Seed supports incremental deployments, which was a major pain point when I last worked with the Serverless Framework (an open source library for deploying Lambdas, one of the first ones.) Nice job team!
[+] sicromoft|5 years ago|reply
This looks great. If you added support for easy/integrated static site hosting, this would be a compelling alternative to Vercel and Netlify. Any plans for that?
[+] jayair|5 years ago|reply
While you can deploy static websites as part of your stack with Serverless Framework and CDK, Seed isn't doing anything specific for them.

Under this scenario, the static site is hosted on the user's AWS account. Is that what you mean when you are thinking about an alternative?

We've talked about this internally, so I'm curious to hear about your use case.

[+] smg|5 years ago|reply
How does Seed compare to AWS CDK Pipelines?

https://aws.amazon.com/blogs/developer/cdk-pipelines-continu...

I know that if I go the CDK Pipelines route, I will need to implement my CI/CD pipeline on my own using CDK. I want to know what the other advantages of Seed are.

[+] seanemmer|5 years ago|reply
Do you plan on supporting Google Cloud Functions?
[+] jayair|5 years ago|reply
It's definitely on our roadmap, a little bit further down the road.

But I'd love to connect and learn more about the specifics of Google Cloud.

[email protected]

[+] freeqaz|5 years ago|reply
Do you have any plans to open source this?

I'm thinking about lock-in -- what if you suddenly deprecated the product? Will my deploys suddenly break?

Are you planning to maintain 1:1 feature parity with Serverless/CDK long-term? Could I fall back to those deployment tools, albeit slower, worst case?

Either way, this is awesome and congrats on the launch!

[+] jayair|5 years ago|reply
Yeah we've definitely talked about open sourcing this and it is a long term goal of ours. I think if we were starting over, we would've open sourced it right from the beginning.

> Could I fall back to those deployment tools, albeit slower, worst case?

Yup, that's how we've designed Seed. We deploy it on your behalf. So if we were to go down, you could still deploy your app just as before.

[+] simoncrypta|5 years ago|reply
Thank you for making serverless easy and accessible! I really enjoy using Seed for some of my projects.
[+] jayair|5 years ago|reply
I really appreciate the kind words and support!