(no title)
tonygrue | 5 years ago
Those managers were eventually all let go, and one of the first things new management did was buy us the most powerful machines they could find and institute performance regression tests that ratcheted down: each change could only keep the existing performance or make it faster, and changes that made it faster set the new watermark. Regressions required management approval.
After a year of stagnant performance, the problem was largely resolved within six months.
Since then I’ve been a believer that you give your developers nearly the fastest platform money can buy, but then have automated, gatekeeping tests for your lowest target platform, with spot checks to make sure your tests reflect the performance users actually experience on that platform.
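The ratcheting gate described above can be sketched as a small CI check. This is a hypothetical illustration, not the commenter's actual setup: the file name, `TOLERANCE` value, and `workload` function are all assumptions. Each run must match or beat the stored watermark (within a small noise allowance), and any improvement tightens the watermark for future runs.

```python
import json
import time
from pathlib import Path

# Hypothetical sketch of a "ratcheting" performance gate: a run may only
# match or beat the stored watermark; improvements tighten the watermark.
WATERMARK_FILE = Path("perf_watermark.json")  # assumed location
TOLERANCE = 0.02  # 2% noise allowance; anything slower than that fails


def measure(fn, repeats=5):
    """Return the best wall-clock time over several runs (seconds)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best


def check_and_ratchet(name, elapsed, watermarks):
    """Fail if slower than the watermark plus tolerance; ratchet down if faster."""
    previous = watermarks.get(name)
    if previous is not None and elapsed > previous * (1 + TOLERANCE):
        raise AssertionError(
            f"{name}: {elapsed:.4f}s regressed past watermark {previous:.4f}s"
        )
    if previous is None or elapsed < previous:
        watermarks[name] = elapsed  # faster run sets the new watermark
    return watermarks


def workload():
    # Stand-in for the real benchmarked operation.
    sum(i * i for i in range(100_000))


if __name__ == "__main__":
    marks = json.loads(WATERMARK_FILE.read_text()) if WATERMARK_FILE.exists() else {}
    marks = check_and_ratchet("workload", measure(workload), marks)
    WATERMARK_FILE.write_text(json.dumps(marks))
```

In practice the gate would run on hardware matching the lowest target platform, with the tolerance tuned so measurement noise doesn't produce spurious failures while real regressions still trip it.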
trhway | 5 years ago
at our BigCo these days we have a modern, dare I say Cloud Native, version of this stupidity: developers can't get even minimally powerful, reliably provisioned cloud resources (the official rationale is similar to what you described; the real one is much worse - plain penny-pinching). So it isn't just slow and unreliable; the humongous cherry on top is a very short lifetime enforced by automatic deprovisioning. Even when you finally manage to spin up our platform - which is far from guaranteed and may take many attempts, since the platform is the very product we're developing and naturally has issues - and you're happy and ready to start working on it, at that moment it all disappears because of that lifetime-enforcing automated deprovisioning. So, not surprisingly, the results are exactly as you described: despite management running around chanting cloud! cloud! cloud!, our products' cloud story has so far gone practically nowhere.