throwaway3489 | 6 years ago
"[Three months prior to the incident] We upgraded our databases to a new minor version that introduced a subtle, undetected fault in the database’s failover system."
Could this have been prevented if you had stopped upgrading minor versions, i.e. frozen on one specific version, not even applying security fixes, and instead relied on containing it as a "known" vulnerable database?
The reason I ask is that I've heard of ATMs still running Windows XP and the like. But if it's not networked, could it actually have better uptime than anything you can achieve on Windows 7 or 10?
What I mean is: even though it's hilariously out of date to be running Windows XP, by any measure it's had a billion device-days to expose its failure modes.
When you upgrade to the latest minor version of a database, don't you sacrifice a known bad for an unknown good?
Excuse my ignorance on this subject.
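The freezing technique being asked about could be sketched as a simple pre-deploy gate. This is only an illustration; the allow-list, function name, and version strings are hypothetical and not from any real deployment:

```python
# Sketch: allow a rollout only against a frozen, known-good database version.
# FROZEN_VERSIONS and the version strings below are hypothetical examples.

FROZEN_VERSIONS = {"9.6.3"}  # the single vetted version we stay on


def version_allowed(server_version: str) -> bool:
    """Return True only if the server reports an exactly pinned version.

    Even a minor bump (e.g. 9.6.3 -> 9.6.4) fails the gate, since minor
    releases can still introduce subtle faults like the one described above.
    """
    return server_version.strip() in FROZEN_VERSIONS
```

Under this policy a minor upgrade is rejected, which is exactly the trade-off being debated: you keep the known-bad version and forgo security and bug fixes in exchange for not taking on unknown new failure modes.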
redis_mlc | 6 years ago
This is a valid question.
As a database and security expert, I carefully weigh database changes. However, developers and security zealots typically charge ahead "because compliance."
Email me if you need help with that.
Thorrez | 6 years ago
But customers want new features, so Stripe makes changes.
throwaway3491 | 6 years ago
Well I mean they're not exactly on the Internet with an IP address and no firewall, are they? (Or they would have been compromised already.)
Whatever it is, it must be separated off as an "insecure enclave".
So that's why I'm wondering about this technique. If you stop upgrading, you don't just miss out on security updates; you miss out on performance and architecture improvements, too.
But can that be the path toward 100% uptime? Known-bad, out-of-date configurations, carefully maintained in a brittle known state?