lindydonna | 7 years ago
> I may be wrong, but from what I can tell this is really a souped-up framework for working with clouds. Most of it looks like primitives for immutable infrastructure, so Terraform but as a library. As far as I can tell, there's no deployment system, no orchestration etc. -- a "Pulumi app" doesn't know how to deploy itself. Is that accurate?
Not quite. A Pulumi app is always deployed through the Pulumi CLI, which manages deployments of Pulumi programs. Note that most Pulumi code runs at deployment time, not at runtime: instead of specifying resources in a configuration language, you write them in code. The Pulumi CLI turns that code into a declarative plan and updates your infrastructure when you run `pulumi update`. You may find this doc page helpful: https://pulumi.io/reference/how.html
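To make the deployment-time model concrete, here's a minimal sketch of what a Pulumi program looks like. It assumes the `@pulumi/aws` provider package; the bucket name is illustrative. This is not code you run directly with Node — the CLI executes it to compute a plan:

```typescript
// index.ts -- a minimal Pulumi program. This code runs at *deployment*
// time: `pulumi update` executes it to build a declarative resource plan,
// diffs that plan against the stack's current state, and applies changes.
import * as aws from "@pulumi/aws";

// Constructing a resource object registers it with the Pulumi engine;
// no bucket actually exists until the plan is applied.
const bucket = new aws.s3.Bucket("my-bucket");

// Exported values become stack outputs, printed when the update completes.
export const bucketName = bucket.id;
```

Running `pulumi update` in the project directory previews the plan (create one S3 bucket) and, on confirmation, applies it.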
Docker builds, plus provisioning of a container registry instance (and pushing images to it), are handled automatically. This blog post walks through the end-to-end container scenario (on AWS rather than GCP): http://blog.pulumi.com/deploying-production-ready-containers...
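A hedged sketch of that build-and-push flow, assuming the `@pulumi/aws` and `@pulumi/docker` packages; the directory and resource names are illustrative, and registry authentication is elided for brevity:

```typescript
// During `pulumi update`, Pulumi runs `docker build` on the local
// Dockerfile and pushes the result to a registry it also provisions.
import * as aws from "@pulumi/aws";
import * as docker from "@pulumi/docker";

// Provision a private container registry (here, AWS ECR).
const repo = new aws.ecr.Repository("app-repo");

// Build ./app's Dockerfile and push the image to that registry.
// (Registry authentication is omitted here for brevity.)
const image = new docker.Image("app-image", {
    build: "./app",
    imageName: repo.repositoryUrl,
});

// The fully qualified image name, usable by a container service.
export const imageName = image.imageName;
```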
> Typically applications aren't self-contained enough that they can, or should, declare all their resources. So presumably then you have to centralize common stuff (e.g. your central Postgres server) in a shared module that all your apps import. Now you get into versioning hell as your app depends on an old version of the "common" module and "pulumi update" tears down your Postgres 10 install and creates a 9.6 install instead. (Presumably it asks first. But still. Versioning has got to be a challenge here.)
This is just one way to architect an app, and as you say, it's unlikely to work well for a database. In that case, you'd likely use a `get` call (e.g. `database.get`) to reference an existing database, which may be managed in a different Pulumi program or stack, or even outside of Pulumi entirely.
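A sketch of such a `get`-style lookup, assuming `@pulumi/aws`; the logical name and instance ID are illustrative and would typically come from config or another stack's outputs:

```typescript
// Reference an existing database instead of declaring (and owning) one.
import * as aws from "@pulumi/aws";

// `get` looks up a resource that already exists -- managed by another
// stack, or created entirely outside Pulumi. Because this program does
// not own it, `pulumi update` will never try to create, replace, or
// delete it on this program's behalf.
const db = aws.rds.Instance.get("shared-postgres", "prod-postgres-id");

// Downstream resources can consume its properties as usual.
export const dbEndpoint = db.endpoint;
```

This sidesteps the versioning concern above: the shared Postgres server is owned by one program, and consumers only read its properties.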