top | item 27979682

incadenza | 4 years ago

I don’t mean to be too critical, but this article is part of a larger trend. I can nod along with all of its central claims and walk away with exactly zero actionable advice or practical path toward integrating this into my team’s workflow.

deurruti | 4 years ago

I would agree with you, and while there is no set advice or plan to follow (that would make things all too easy), there are a couple of things here that we could all apply to our teams.

> Tracking automated test coverage (unit, integration, UI) is a performative task that doesn’t provide hard evidence to increase confidence with stakeholders. Instead, we should shift automated testing from functional testing toward compliance, accessibility, and security testing, and track coverage there.

> Shift testing to the left. This has been a major problem in the organizations I’ve worked at, and we should continue to keep a close eye on it. We should establish processes to get QA involved as early as possible in architecture reviews, design reviews, and other early processes that tend to be dev-, product-, and design-focused only.

> Continue to build up our embedded QA unit to be a source of insight for multiple stakeholders and provide domain knowledge for our products. As QA, we should always be asking two questions: are we building the correct product, and are we building it correctly?

ipnon | 4 years ago

This speaks to the artfulness of testing, and of programming in general. There is no science of testing. Testing well is a skill learned over decades, as in the case of the author.

Testing is a fraught activity, too often leaving stakeholders without confidence and leaving programmers feeling like they are just going through the motions. Yet there is clearly some value to testing; who prefers untested code to tested code?

Our lack of definitive answers about how best to test should not discourage us from testing. We should instead appreciate the inexactness of good testing and seek to develop a fine sensitivity for how to test our software well.

jeffreygoesto | 4 years ago

What would you expect as actionable advice? The software world is so diverse, how could a reasonably sized article cover all those needs?

The best I can think of would be: regression tests are OK and mean "don't you ever do _that_ again". But they are not sufficient to catch the funny ways your customer will use the software. For that you need people striking a balance, using it in new but realistic enough ways.
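A minimal sketch of the "don't you ever do _that_ again" idea, with hypothetical names (`parse_price` and the bug it once had are invented for illustration):

```python
# Regression-test sketch: pin down a previously fixed bug so it can
# never silently return. All names here are hypothetical examples.

def parse_price(text):
    """Parse a price string like '$1,234.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_regression_comma_grouping():
    # Imagine a past bug where comma-grouped prices were mis-parsed.
    # This test encodes "don't you ever do _that_ again":
    assert parse_price("$1,234.50") == 1234.5
    # A plain value stays correct too:
    assert parse_price("$7.99") == 7.99

test_regression_comma_grouping()
print("regression tests passed")
```

The test does nothing to anticipate novel misuse by customers, which is exactly the limitation the comment above points out; it only guards against a known failure recurring.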

bluGill | 4 years ago

Any time someone creates actionable testing advice, I automate it. However, that still isn't enough to ensure quality, so I need humans to find the bugs that the actionable advice didn't catch.