top | item 19653312

mverwijs | 6 years ago

Nice piece. Looking forward to Part II.

What I am often missing, both in this type of article and in actual production environments, is the fact that if you develop (infrastructure) code, you also need to test that (infrastructure) code. Which means you need actual infrastructure to test on.

In my case, this means network equipment, storage equipment and actual physical servers.

If you're in a cloud, this means you need a separate account on a separate credit card, and you start from there to build up the infra that Dev and Ops can deploy their infra-as-code on.

And this test-infrastructure is not the test environments other teams run their tests on.

If that is not available, automating your infrastructure is dangerous at best, since you cannot properly test your code. And your code will rot.
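To make the point concrete, here is a minimal sketch of the kind of check a dedicated test environment enables: compare the state your infra code declares against what actually got provisioned. The resource attributes and values are hypothetical examples, and reading the live state is left out.

```python
# Sketch: a drift check for infrastructure code, meant to run against a
# dedicated test environment, not the environments other teams test on.

def diff_state(desired: dict, actual: dict) -> dict:
    """Return the attributes where provisioned state drifts from the code."""
    return {k: (v, actual.get(k)) for k, v in desired.items()
            if actual.get(k) != v}

# Hypothetical example: what the infra code declares vs. what was provisioned.
desired_state = {"instance_type": "t3.small", "volume_gb": 100, "public": False}
actual_state  = {"instance_type": "t3.small", "volume_gb": 50,  "public": False}

drift = diff_state(desired_state, actual_state)
# drift now contains {"volume_gb": (100, 50)}: the test caught the mismatch.
```

Without real infrastructure to run `actual_state` collection against, a check like this has nothing to compare, which is exactly the gap described above.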

stingraycharles|6 years ago

I found that Kubernetes + minikube (or a variant of that) is a fairly straightforward way to handle this. Teams / developers can easily set up a local testing environment, product owners can QA that way, etc.

This of course depends on your level of lock-in with various cloud environments.

pm90|6 years ago

IaC tools often handle more than Kubernetes, but agreed that k8s is a fantastic way to get reproducible behavior, which is absolutely imperative for testing.

This is kinda why I love Google Cloud and don't see myself moving to another cloud provider until they match GKE. I want all developers to throw everything into GKE, and Operations manages only the VPCs, firewalls, etc. Developers get complete ownership over compute (and networking within the cluster), while broader network management can still be handled by an operations team.

mverwijs|6 years ago

Yup - that works pretty well. And gives developers some insight into what is required to get stuff working.

It does assume no hardware or complex networking needs to be handled.

And there is the point of observability. When there is a proper testing ground for developers that is as-production, it enables developers to dig into and mess with logging, tracing, debugging of all sorts.

This adds value by giving developers insight into what a reliability engineer (or whatever they call sysadmins these days) needs in order to provide whatever service the developers' code is part of.

scarface74|6 years ago

Why would you need a separate credit card? It’s easy enough to set up separate accounts under an Organization with shared billing and with rules that work across accounts.
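For reference, the Organizations route looks roughly like this with boto3. The account-naming and email-per-account conventions here are assumptions, and the actual API call is left commented out so the sketch stands on its own:

```python
# Sketch: carving out an isolated sandbox account under an AWS Organization
# with shared billing. Naming conventions below are hypothetical.

def sandbox_account_request(team: str, billing_domain: str) -> dict:
    """Build the parameters for organizations.create_account()."""
    return {
        "AccountName": f"sandbox-{team}",
        # AWS Organizations requires a unique email per member account.
        "Email": f"aws-sandbox-{team}@{billing_domain}",
    }

params = sandbox_account_request("platform", "example.com")
# With credentials in place, this would become:
#   import boto3
#   boto3.client("organizations").create_account(**params)
```

Guardrails across those accounts would then come from service control policies at the Organization level, which is the "rules that work across accounts" part.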

mverwijs|6 years ago

Because I want to limit the impact of maxing out a credit card to one environment.

And I want engineers to be able to futz about with all cloud services available, without having to worry about any negative impact on production.

And finally: What happens when $cloud_provider makes changes to the accounts interface and you want to mess around with those new features, without hitting production?

Give your future-self a break, and make sure you can futz around on any and every layer.

Another common practice is using separate domain names. Don't use 'dev.news.ycombinator.com'. Instead, use 'news.ycombinator.dev'. This frees you up to mess around with the API of the DNS provider. And when switching DNS providers, test whatever automation you have in place for this.
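That naming convention is small enough to capture in a helper; the domains below are just the example from this comment, and swapping the TLD (rather than nesting a label under the production zone) is the whole trick:

```python
# Sketch: map a production hostname onto a separate dev TLD instead of
# nesting a "dev." label under the production zone.

def dev_domain(prod: str, dev_tld: str = "dev") -> str:
    """e.g. news.ycombinator.com -> news.ycombinator.dev"""
    labels = prod.split(".")
    return ".".join(labels[:-1] + [dev_tld])

print(dev_domain("news.ycombinator.com"))  # news.ycombinator.dev
```

Because the dev zone lives at a different registrar-level name, DNS automation (delegation, record pushes, provider migration) can be exercised end to end without ever touching the production zone.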

kylek|6 years ago

Part 2 is linked at the bottom :) looking forward to part _3_!