top | item 38339097


andrewprock | 2 years ago

You should be very skeptical of anyone who claims they have 100% test coverage.

Only under very rare circumstances is 100% test coverage even possible, let alone achieved. Typically when people say "coverage" they mean code line coverage, as opposed to the more useful code path coverage. Since it's combinatorially expensive to enumerate all possible code paths, you rarely see 100% code path coverage in a production system. You might see it when testing very narrow ADTs, for example booleans or floats. But you'll almost never see it for a black box that takes more than one simply defined input, even one doing cheap work.
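A minimal sketch of the line-vs-path distinction (my own illustration, not from the comment; the function name is hypothetical). With two independent branches there are four paths, but a single test can touch every line:

```typescript
// Hypothetical example: one test reaches 100% *line* coverage
// while exercising only 1 of the 4 *paths* through the function.
function applyFlags(a: boolean, b: boolean): number {
  let x = 0;
  if (a) x += 1; // branch 1
  if (b) x += 2; // branch 2
  return x;
}

// This single call executes every line of applyFlags...
applyFlags(true, true);  // 3
// ...yet the paths (a,b) = (T,F), (F,T), (F,F) remain untested.
// With n independent branches, full path coverage needs up to 2^n cases,
// which is the combinatorial blow-up the comment describes.
```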

TeMPOraL|2 years ago

I think the point isn't about 100% coverage - that's obviously a lie, because if even one line in a million-line project isn't covered, you don't have 100% coverage. I think claiming >50% code coverage is already suspicious. Unless you're writing life-critical code or have some amazing test automation technology I've never heard about, I don't buy it.

j1elo|2 years ago

Agree with both. Recently the YouTube channel ThePrimeagen was talking about this and, coincidentally, put up a very silly but clarifying example. Luckily he also posted it to Twitter, so here it is just for fun [1]:

  function foo(num: number): number {
    const a = [1];
    let sum = 0;
    for (let i = 0; i < num; ++i) {
      sum += a[i];
    }
    return sum;
  }

  test("foo", () => {
    expect(foo(1)).toEqual(1);
  });

  100% test coverage
  100% still bugged af
[1]: https://nitter.net/ThePrimeagen/status/1639250735505235975
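For reference, here is the same function in plain TypeScript with my own annotations (not part of the tweet) showing why the single test passes while the function is still broken: the array has exactly one element, so any `num > 1` reads past its end.

```typescript
function foo(num: number): number {
  const a = [1];
  let sum = 0;
  for (let i = 0; i < num; ++i) {
    // For i >= 1, a[i] is undefined at runtime (the type checker
    // doesn't catch this without noUncheckedIndexedAccess), and
    // number + undefined evaluates to NaN.
    sum += a[i];
  }
  return sum;
}

foo(1); // 1   - the only tested input; it executes every line
foo(2); // NaN - a second iteration of the loop exposes the bug
```

So the test suite reports 100% line coverage while missing the path that breaks the function, which is exactly the line-vs-path gap described upthread.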

agumonkey|2 years ago

I've been asking a few people what range is good; many say 90% is great, and 70% is the ideal balance against maintenance cost.

The answers seem to vary a lot.