top | item 34469111

ablatt89 | 3 years ago

There's definitely testing, but it's not at the level it should be. That is, there's presubmit and postsubmit testing, but who is triaging the results? And how often do we see tests designed to pass instead of designed to fail and catch issues? I can't speak for YouTube, but some regressions can perhaps be attributed to what management claims is lower productivity from new grads and mid-levels, who are often assigned work like writing tests and automation (everyone in every company just wants to work on features). I think manual QA is highly underrated in the industry and catches so many bugs that automation just can't. My guess is that manual QA efforts have decreased, and the discrepancy in output between seniors and juniors/mid-levels might be causing some quality issues (just my guess).
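The "designed to pass" point can be made concrete with a minimal Python sketch (the `apply_discount` function and its bug are hypothetical, purely for illustration): a test that only checks the return type stays green no matter what the code does, while a test that pins the expected value catches the regression.

```python
def apply_discount(price: float, percent: float) -> float:
    """Intended: percent is 0-100. Bug: forgets to divide percent by 100."""
    return price * (1 - percent)

# A test "designed to pass": it checks only the return type, so it stays
# green even though the math is badly wrong.
weak_test_passes = isinstance(apply_discount(100.0, 10.0), float)

# A test "designed to fail": it pins the expected value, so any regression
# in the formula turns it red.
strong_test_passes = apply_discount(100.0, 10.0) == 90.0

print(weak_test_passes, strong_test_passes)  # True False
```

The weak test contributes coverage numbers but zero defect detection, which is exactly how a suite can be "passing" while the product regresses.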


fidgewidge | 3 years ago

Yeah, totally. They never had much manual QA. I once worked at Google, and part of my job was in a rotation doing server pushes. As part of the push process I'd quickly check out the one paragraph of release notes from the releng guy, which would note any new user-visible features, and go play with them on a canary server.

The frequency with which new features point-blank did not work at all just blew me away. The problem, 100% of the time, was that some dev had implemented a feature from a bug ticket and written some unit tests that covered the code, but had never actually brought the serving stack up and played with it themselves. They thought QA meant some test with lots of mocks, not, ya know, actually testing the feature as a user would see it. It was painful to run the servers locally, so a lot of the time they just didn't bother.
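The failure mode described above can be sketched in a few lines of Python with `unittest.mock` (the `Backend` class, its bug, and `handle_request` are all hypothetical): the mocked unit test passes because the broken dependency is never exercised, while touching the real stack fails immediately.

```python
from unittest import mock

class Backend:
    """Hypothetical real dependency; in this sketch it is broken."""
    def fetch_greeting(self, user: str) -> str:
        raise RuntimeError("serving stack misconfigured")  # the real bug

def handle_request(backend: Backend, user: str) -> str:
    # The code under test: calls the backend and post-processes the result.
    return backend.fetch_greeting(user).upper()

# Unit test with a mock: green, because the broken backend is never called.
fake = mock.Mock(spec=Backend)
fake.fetch_greeting.return_value = "hello alice"
assert handle_request(fake, "alice") == "HELLO ALICE"  # passes

# Exercising the real stack (even locally) exposes the bug at once.
try:
    handle_request(Backend(), "alice")
    real_stack_works = True
except RuntimeError:
    real_stack_works = False
print(real_stack_works)  # False
```

Mocks are fine for isolating logic, but a suite made only of them verifies the mocks' assumptions, not the running system — which is why "played with it on a canary server" keeps catching what the unit tests can't.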

56friends | 3 years ago

What do you mean by "who is triaging the results"? A pre-submit failure will block your PR from landing; a post-submit failure (e.g. in an integration test suite) will result in an incident and likely a deploy pause/rollback.