aab0
9 years ago
The obsession with information theory here seems like a classic nail-hammer thing. The number of bits my tests convey is totally useless to think about, and certainly not worth spending pages on. All I want from the tests of a code base I maintain across thousands of patches is a tiny fraction of a bit: did my latest change break an important behavior or invariant encoded in a unit test? If I only screw up once in every 100 patches, then formally my unit tests are doing all that work to emit about 0.015 bits of information per patch (-log2(99/100)), which is a totally irrelevant thing to know about my unit testing framework. ('Hey Joe, what have you been up to?' 'Fixing my unit testing framework - I'm up to 0.03 bits per patch!' 'I see.')
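For what it's worth, the back-of-envelope number above is easy to check. A minimal sketch, assuming base-2 logs and the commenter's hypothetical one-break-per-100-patches rate (the function name `surprisal_bits` is mine, not from any framework):

```python
import math

def surprisal_bits(p: float) -> float:
    """Self-information, in bits, of observing an event of probability p."""
    return -math.log2(p)

# Hypothetical rate from the comment: the tests pass 99 times out of 100.
p_pass = 99 / 100

# A passing run is almost no surprise at all...
print(f"passing run: {surprisal_bits(p_pass):.4f} bits")      # ~0.0145

# ...while the rare failing run is where nearly all the information lives.
print(f"failing run: {surprisal_bits(1 - p_pass):.4f} bits")  # ~6.64
```

Which is arguably the commenter's point: measured this way, the typical test run conveys almost nothing, even though the rare failure is exactly the signal the tests exist to deliver.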