top | item 37389279

frankdejonge | 2 years ago

Most solutions on their own don't improve things a whole lot. Yet, as part of a system of supporting practices, they can be very powerful. The primary thing is that you need a system, not just the individual parts. Testing without changing the design of your code is a horrible experience. Applying techniques like dependency inversion/injection has a positive effect on isolating behaviour, which makes testing easier. Making code more deterministic makes tests easier. Pushing side effects out of your core logic makes testing easier. All of those things add up to more than the sum of their parts, which is an indication that you're dealing with a system.

qsort | 2 years ago

I'm fine with encouraging proper tests. On a base level, TDD encourages tests, so it's overall fine, at least in principle.

My main gripe with it is the second-order concern that it encourages testing practices that are frankly not very intelligent.

How you should approach testing depends on what kind of function you are testing. Pure functions of their inputs should be tested with property-based tests. If you have

    bool isPrime(int n)
the whole rigmarole of "make a test that fails, then write the least amount of code that makes it pass" brings you to something like:

    assertFalse(isPrime(1));
    assertTrue(isPrime(2));
    assertTrue(isPrime(3));
    assertFalse(isPrime(15));
and so on, where what you really want is to say something like:

    for all 1 < i < n . n % i != 0
This obviously works less well when you have to deal with the real world, but even in that case TDD leaves you with a patchy and inflexible approach.
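The property-based approach described above can be sketched in plain Python: instead of a handful of hand-picked cases, assert the defining property of primality over a sweep of inputs. Both `is_prime` and `check_prime_property` are hypothetical names written here for illustration.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check (a simple reference implementation)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def check_prime_property(n: int) -> bool:
    """The property from the comment: n is prime iff no i with 1 < i < n divides it."""
    expected = n > 1 and all(n % i != 0 for i in range(2, n))
    return is_prime(n) == expected

# Check the property over a range of inputs rather than a few examples.
assert all(check_prime_property(n) for n in range(-5, 500))
```

A dedicated property-based testing library (e.g. Hypothesis) would generate and shrink the inputs automatically; the loop above just makes the idea concrete.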

drewcoo | 2 years ago

TDD works just fine with property-based tests, with each case representing an equivalence class. I like to randomly select from those classes because that eventually double-checks that I set my boundaries correctly. I often additionally pin the class boundaries in place.

https://en.wikipedia.org/wiki/Equivalence_partitioning
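A minimal sketch of that practice, using a made-up speed-limit check: each equivalence class is a range, the test picks a random member of each class, and the exact boundaries are additionally pinned. The function, the partitions, and the limit are all assumptions for illustration.

```python
import random

def is_speeding(speed: int, limit: int = 60) -> bool:
    """Hypothetical function under test."""
    return speed > limit

# (class range, expected result) — one equivalence class per row
partitions = [
    ((0, 60), False),   # at or under the limit
    ((61, 200), True),  # over the limit
]

for (lo, hi), expected in partitions:
    # A random member of the class double-checks the boundary placement...
    assert is_speeding(random.randint(lo, hi)) == expected
    # ...and pinning both edges of the class catches off-by-one regressions.
    assert is_speeding(lo) == expected
    assert is_speeding(hi) == expected
```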

MoreQARespect | 2 years ago

Dependency injection is only useful when you've managed to isolate logic/math-intensive code. Some apps don't have any logic-intensive code. Many others have very little.

In those cases dependency injection just increases the SLOC, with the payoff that you are able to write a bunch of trivial unit tests that'll probably never catch a bug.

Integration tests as a default have the best ROI in those cases.

frankdejonge | 2 years ago

I look more towards optionality myself. Take for example the encapsulation of randomness. Depending on an abstract notion of randomness (an interface) that decouples you from the implementation is as useful for testing as it is for maintenance of a system. For tests, removing randomness entirely makes it deterministic, allowing you to test for exact matches instead of approximations. For systems, smaller-scale ones can get away with a reduced amount of randomness, while systems at scale require more sophisticated code for this. You don't want to replace all of that code in all instances, but rather leverage the capability and replace the implementation.
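The encapsulation of randomness described above might look like this sketch: production code depends on an abstract `Randomness` interface, and tests substitute a deterministic implementation. All the names here are hypothetical.

```python
import random
from typing import Protocol

class Randomness(Protocol):
    """Abstract notion of randomness that callers depend on."""
    def next_int(self, lo: int, hi: int) -> int: ...

class SystemRandomness:
    """Production implementation backed by the stdlib RNG."""
    def next_int(self, lo: int, hi: int) -> int:
        return random.randint(lo, hi)

class FixedRandomness:
    """Deterministic stand-in for tests: always returns a fixed value."""
    def __init__(self, value: int):
        self.value = value
    def next_int(self, lo: int, hi: int) -> int:
        return self.value

def roll_dice(rng: Randomness) -> int:
    """Core logic that only knows the abstraction, not the implementation."""
    return rng.next_int(1, 6)

# In tests, exact assertions become possible instead of approximations.
assert roll_dice(FixedRandomness(4)) == 4
```

Swapping `SystemRandomness` for a more sophisticated implementation at scale then touches the wiring, not the core logic.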

Another angle is the encapsulation of storage. Using in-memory storage for tests makes CI pipelines very quick, and production storage might evolve over time to accommodate scaling requirements (sharding and such).
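A sketch of that storage encapsulation, under assumed names: callers depend on an abstract repository, tests use a dict-backed implementation, and production can bind a database-backed one behind the same interface.

```python
from typing import Optional, Protocol

class UserRepository(Protocol):
    """Abstract storage interface the rest of the system depends on."""
    def save(self, user_id: str, name: str) -> None: ...
    def find(self, user_id: str) -> Optional[str]: ...

class InMemoryUserRepository:
    """Fast, dependency-free implementation for tests and CI pipelines."""
    def __init__(self) -> None:
        self._users: dict = {}
    def save(self, user_id: str, name: str) -> None:
        self._users[user_id] = name
    def find(self, user_id: str) -> Optional[str]:
        return self._users.get(user_id)

repo = InMemoryUserRepository()
repo.save("42", "Ada")
assert repo.find("42") == "Ada"
assert repo.find("7") is None
```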

Tainnor | 2 years ago

You don't have to inject every dependency. If it's essentially a pure function, then just call that function (despite what old-school Java style advocates, free functions not tied to any object are fine).

I like dependency injection for things which have state and/or do IO and/or are expensive to construct.
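The distinction above can be sketched as follows: the pure function is simply called, while the clock (an IO-like dependency, used here as a hypothetical example) is injected so tests can fix it. The names and the VAT rate are illustrative assumptions.

```python
from datetime import datetime, timezone
from typing import Callable

def add_vat(net: float, rate: float = 0.2) -> float:
    """Pure function: call it directly in tests, no injection needed."""
    return round(net * (1 + rate), 2)

def make_receipt(net: float, now: Callable[[], datetime]) -> str:
    """`now` is injected because it reads the clock (state/IO)."""
    return f"{now().date().isoformat()}: {add_vat(net)}"

# Production wiring would pass the real clock:
#   make_receipt(10.0, lambda: datetime.now(timezone.utc))
# Test wiring pins the clock, making the output exact:
fixed = lambda: datetime(2023, 9, 1, tzinfo=timezone.utc)
assert make_receipt(10.0, fixed) == "2023-09-01: 12.0"
```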

taberiand | 2 years ago

This isn't really true. For one, dependency injection isn't specific to testing or processing logic; it's an architectural approach used primarily for managing separation of concerns and modularisation.