top | item 42553521


lbriner | 1 year ago

Like others, I think this is a solution to an idealised problem, and it very quickly breaks down in practice.

Firstly, if we could accurately know the dependencies that potentially affect a top-level test, we would be unlikely to have a problem in the first place. Our code base is not particularly complex - probably around 15 libraries plus a web app and API in a single solution - yet a change to something in a library potentially affects about 50 places (though it might not affect any of them), and most of the time there is no direct or easy visibility of what calls what, which calls what, which calls what. There is also no correlation between folders and top-level tests. Most code is shared - how would that work?
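To make the "what calls what" problem concrete, here is a minimal sketch of the reverse-dependency walk a selective-test tool would need. The project names and the dependency map are entirely made up for illustration; in a real solution this map would have to be extracted from project references, and it is exactly the part that is hard to keep accurate.

```python
from collections import deque

# Hypothetical map: library -> projects that reference it directly.
# These names are illustrative only, not from any real solution.
depended_on_by = {
    "Core": ["Billing", "Auth", "WebApp"],
    "Billing": ["WebApp", "Api"],
    "Auth": ["WebApp", "Api"],
}

def affected_projects(changed: str) -> set[str]:
    """Breadth-first walk of the reverse-dependency graph: everything a
    change to `changed` could transitively reach, however indirectly."""
    seen, queue = set(), deque([changed])
    while queue:
        lib = queue.popleft()
        for dependant in depended_on_by.get(lib, []):
            if dependant not in seen:
                seen.add(dependant)
                queue.append(dependant)
    return seen

# A change to one shared library fans out to almost everything.
print(sorted(affected_projects("Core")))  # ['Api', 'Auth', 'Billing', 'WebApp']
```

Even in this toy graph, one change to a shared library pulls in nearly every project, which is the point: with heavy sharing, "selective" tends to collapse into "run everything".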

Secondly, we use some shared front-end code (like many on HN), where a simple change could break every single other front-end page. That might be bad architecture, but it is what it is, and so any front-end change would need to run every UI test. The breakages might be subtle - a specific button now disappears behind a sidebar, say - not noticeable on the other pages, but it will definitely break a test.

Thirdly, you have to run all of your tests before deploying to production anyway, so while some fast feedback early on is nice, most likely you won't notice the bad stuff until the 45-minute test suite has run, at which point you have blocked production and will have to prove you have fixed it before waiting another 45 minutes.

Fourthly, a big problem for us (maybe 50% of the failures) is flaky tests (caused by flaky code, timing issues, database state issues or just hardware problems), and running selective tests doesn't deal with this at all.
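For comparison, the usual blunt instrument for flakiness is a retry wrapper rather than test selection. This is a hedged sketch of that idea (the function and the simulated flaky test are invented for illustration; real test runners offer this as a built-in or plugin):

```python
import time

def retry_flaky(test_fn, attempts=3, delay=0.0):
    """Re-run a test up to `attempts` times; only a consistent failure
    counts as a failure. Blunt: it can mask genuine timing bugs."""
    last_error = None
    for _ in range(attempts):
        try:
            return test_fn()
        except AssertionError as exc:
            last_error = exc
            time.sleep(delay)
    raise last_error

# Simulated flaky test: fails twice, then passes.
calls = {"n": 0}
def sometimes_fails():
    calls["n"] += 1
    if calls["n"] < 3:
        raise AssertionError("transient failure")
    return "ok"

print(retry_flaky(sometimes_fails))  # prints "ok" after two silent retries
```

Note that retries address a different failure mode than selection does: they buy stability at the cost of hiding real intermittent bugs, which is why quarantining and fixing the flaky tests is still the only durable answer.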

And lastly, we already run tests somewhat selectively - we run unit tests on branch builds before building main, and we run a number of test projects in parallel - but with less than perfect developers, less than perfect architecture, and less than perfect CI tools and environments, I think we are just left to incrementally improve things: identifying parallelisation opportunities, not over-testing functionality that is off the main paths, and so on.
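The parallelisation mentioned above can be sketched in a few lines. The suite names and durations here are stand-ins; in CI each entry would be a separate test-runner invocation (e.g. one per test project), and the wall-clock win is that the total approaches the slowest suite rather than the sum:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

# Stand-ins for independent test projects; durations are made up.
suites = {"Unit": 0.02, "Api": 0.05, "UI": 0.03}

def run_suite(name: str, duration: float) -> tuple[str, bool]:
    time.sleep(duration)   # pretend to run the test project
    return name, True      # (suite name, passed)

def run_all_parallel() -> dict[str, bool]:
    """Launch every suite at once and collect results as they finish."""
    with ThreadPoolExecutor(max_workers=len(suites)) as pool:
        futures = [pool.submit(run_suite, n, d) for n, d in suites.items()]
        return dict(f.result() for f in as_completed(futures))

print(run_all_parallel())
```

This only helps when the suites are genuinely independent - shared database state, the fourth point above, is exactly what breaks that assumption.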
