sovietmudkipz|1 month ago
I think the companies I worked for were prioritizing working on no issue deployments (built from a series of documented and undocumented manual processes!) rather than making services resilient through chaos testing. As a younger dev this priority struck me as heresy (come on guys, follow the herd!); as a more mature dev I understand time & effort are scarce resources and the daily toil tax needs to be paid to make forward progress… it’s tough living in a non-ideal world!
oooyay|1 month ago
I think that's why most companies don't do it. A lot of tedium and the main benefit was actually getting your ducks in a row.
GauntletWizard|1 month ago
A significant problem with early 'Web Scale' deployments was out-of-date or stale configuration values. You would specify that your application connects to backend1.example.com for payments and backend2.example.com for search. A common bug in early libraries was that the connection was established once at startup and then never again. When the backend1 service was long-lived, this just worked for months or years at a time - TCP is very reliable, especially if you have sane values for keepalives and retries. Chaos Monkey helped find this class of bug.
A more advanced but quite similar class of bug: you configured a DNS name, which was evaluated once at startup and, again, never re-evaluated. Your server for backend1 had a stable address for years at a time, but suddenly you needed to fail over to your backup or move it to new hardware. At the time of Chaos Monkey, I had people fight me on this - they believed that doing a DNS lookup every five minutes for your important backends was unacceptable overhead.
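A minimal sketch of the fix for that second bug - re-resolving DNS on a TTL instead of caching the answer forever. The hostname, the 300-second TTL, and the `Backend` class are all illustrative, not anything from the comment above:

```python
import socket
import time

class Backend:
    """Resolves a backend's address with a TTL instead of caching forever.

    The classic bug was resolving once at startup; if backend1 failed
    over to a new IP, the client kept the stale answer indefinitely.
    """

    def __init__(self, hostname, ttl=300, resolver=socket.gethostbyname):
        self.hostname = hostname
        self.ttl = ttl                      # seconds before re-resolving
        self._resolver = resolver           # injectable for testing
        self._addr = None
        self._resolved_at = 0.0

    def address(self):
        # Re-resolve whenever the cached answer is older than the TTL,
        # so a failover or hardware move is picked up within minutes.
        now = time.monotonic()
        if self._addr is None or now - self._resolved_at > self.ttl:
            self._addr = self._resolver(self.hostname)
            self._resolved_at = now
        return self._addr
```

A lookup every five minutes per backend is a trivial load on any resolver, which was the counterargument to the "unacceptable overhead" objection.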
The other part is that modern deployment strategies make these old problems impossible to sustain in the first place. If you're deploying on Kubernetes, you don't have an option here - your pods are getting rebuilt with new IP addresses regularly. If you're connecting to a service IP, then that IP is explicitly a load balancer - it is defined as stable. These concepts are not complex, but they are edge boundaries, and we have better and more explicit contracts because we've recognized the need; deploying this way is simply the default now.
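A minimal sketch of that contract, assuming a hypothetical `payments` workload (names and ports are illustrative): the Service's ClusterIP is the stable address, while the pod IPs behind it churn freely.

```yaml
# Hypothetical example - names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: payments
spec:
  selector:
    app: payments        # pods matching this label come and go
  ports:
    - port: 443
      targetPort: 8443
  # type defaults to ClusterIP: one stable virtual IP for the
  # lifetime of the Service, regardless of pod restarts behind it.
```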
Those are just the Chaos Monkey problems, though. Latency Monkey is huge, but solves a much less common problem. Conformity Monkey is mostly solved by compliance tools; you don't build it, you buy it. Doctor Monkey is just healthchecks - K8s (and other deployment frameworks) has those built in.
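The built-in equivalent of Doctor Monkey is a pod spec fragment like this (hypothetical container name, path, and timings - all illustrative):

```yaml
# Hypothetical pod spec fragment - values are illustrative.
containers:
  - name: backend1
    image: example/backend1:latest
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15    # kubelet probes every 15s and restarts
                           # the container after repeated failures
```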
In short, Chaos Monkey isn't necessary because we've injected the chaos ourselves and learned to control most of what it was finding, and people have adopted the other tools - they're just not standalone, they're built in.
bpt3|1 month ago
If you know things will break when you start making non-deterministic configuration changes, you aren't ready for chaos engineering. Most companies never get out of this state.