This is called a "golden master": giving input X to the system and recording the output as a test expectation. The difference from the parent's approach is that it is far less granular, so both have value.
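A minimal sketch of the golden-master idea, assuming a JSON-serializable system output and a hypothetical `golden/` directory for recorded expectations (all names here are illustrative, not from any specific framework):

```python
import json
import pathlib

GOLDEN_DIR = pathlib.Path("golden")  # hypothetical directory of recorded outputs

def system_under_test(x):
    # Stand-in for the real system whose behaviour we want to pin down.
    return {"doubled": x * 2, "label": f"item-{x}"}

def record_golden(name, inp):
    """Run the system once and record its output as the expectation."""
    GOLDEN_DIR.mkdir(exist_ok=True)
    out = system_under_test(inp)
    (GOLDEN_DIR / f"{name}.json").write_text(json.dumps(out, sort_keys=True))
    return out

def check_against_golden(name, inp):
    """Re-run the system and compare its output to the recorded golden value."""
    expected = json.loads((GOLDEN_DIR / f"{name}.json").read_text())
    return system_under_test(inp) == expected

record_golden("case1", 21)
assert check_against_golden("case1", 21)
```

The point is that the expectation is captured from the system itself rather than written by hand, which is exactly why it is coarse-grained: it pins the whole output, not individual behaviours.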
This is something I've yet to see a software testing framework do: compare the results of two different implementations separated in time, i.e. two different revisions checked out of source control, using one as the golden reference for the other.
bazhenov/tango does something like this for performance tests: to counter systematic noise from the machine, you run the old and new implementations at the same time.
I maintained a fairly successful legacy C/C++ application that performed business-critical functions. The core developers were long gone, and countless changes had gone into it since. The only way the application survived was automatically comparing outputs and intermediate data structures against "ideal" values: hundreds of business scenarios, each with its own input dataset. No one really understood what each scenario or ideal value meant. So your goal was to minimize divergence, and to update the test case if the divergence was deemed acceptable.
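The "minimize divergence, then update if acceptable" loop described above can be sketched roughly as follows; the divergence metric and the acceptance threshold are hypothetical, chosen only to illustrate the workflow:

```python
def divergence(ideal, actual):
    """Count fields whose values differ between the ideal and actual outputs."""
    keys = set(ideal) | set(actual)
    return sum(1 for k in keys if ideal.get(k) != actual.get(k))

ideal = {"total": 100.0, "rows": 42, "status": "ok"}
actual = {"total": 100.0, "rows": 43, "status": "ok"}

d = divergence(ideal, actual)
assert d == 1

# If a human reviews the difference and deems it acceptable, the ideal
# values are updated to the actual ones ("update the test case").
ACCEPTABLE = 1  # hypothetical acceptance threshold
if d <= ACCEPTABLE:
    ideal = dict(actual)
assert divergence(ideal, actual) == 0
```

In the real application the comparison would cover whole intermediate data structures rather than a flat dict, but the control flow is the same: measure drift, review it, and either fix the code or bless the new values.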