matthew16550 | 5 months ago
They all seem to be names for more or less the same idea.
The first time a test runs successfully it auto captures the output as a file. This is the "approved" output and is committed with the code or saved in whatever test system you use.
The next time the test runs, it captures the new output and auto compares it with the approved output. If identical, the test passes. If different, the test fails and a human should investigate the diff.
The technique works with many types of data:
* Plain text.
* Images of UI components / rendered web pages. This can check that a code change or a new browser version does not unexpectedly alter the appearance.
* Audio files created by audio processing code.
* Large text logs from code that has no other tests. This can help when refactoring: with luck, an accidental side effect will show up as an unexpected diff.
See:
* https://approvaltests.com/
* https://cucumber.io/blog/podcast/approval-testing/
* https://en.wikipedia.org/wiki/Characterization_test