True, it quantizes (i.e. bins) the samples, so it isn't right for tests that need to be 100% sample-perfect, at least vertically. It's a compromise between a few tradeoffs: easy readability just from looking at the code itself (you could use images, but then there's a separate file to keep track of, or you're staring at binary data as a float[]) versus strict correctness. How you weigh those tradeoffs definitely depends on what you're doing; in my case, most of the potential bugs relate to horizontal time resolution, not vertical sample-depth resolution.

If the precise values of those floats matter in your domain (which they very well may), a combination of approaches would probably be good!
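Something like this rough sketch of the combination, say (Python/numpy; all the names here are made up, and the quantization mapping assumes samples in [-1, 1]): run the strict numeric check only where exact values matter, and use the readable graph snapshot everywhere else.

```python
import numpy as np

def assert_samples_match(actual: np.ndarray, expected: np.ndarray,
                         tolerance: float = 1e-4) -> None:
    """Strict check: every sample within `tolerance` of the expected value."""
    peak_error = float(np.max(np.abs(actual - expected)))
    assert peak_error <= tolerance, f"peak error {peak_error} > {tolerance}"

def assert_graph_matches(actual: np.ndarray, expected_graph: str,
                         height: int = 16) -> None:
    """Readable check: quantize samples in [-1, 1] into `height` vertical
    bins and compare the resulting ASCII graph against an inline snapshot."""
    bins = np.clip(((actual + 1.0) / 2.0 * (height - 1)).round().astype(int),
                   0, height - 1)
    rows = ["".join("#" if bins[x] == y else " " for x in range(len(bins)))
            for y in range(height - 1, -1, -1)]
    assert "\n".join(rows) == expected_graph
```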
Would love to hear how well this approach works for you guys. Keep me updated :)
phab|4 years ago
Obviously, any time you're working with floating-point sample data, the precise values will almost never be bit-accurate against what your model predicts (sometimes even when that model is a previous run of the same system with the same inputs, as in this case); it's about defining an acceptable deviation. I guess what I'm saying is that for audio software, a peak-to-peak error of 0.1 equates to a signal at -20 dBFS (ref dBFS @ 1.0), which is of course quite a large amount of error for an audio signal, so perhaps using higher-resolution graphs would be a good idea.
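To make the arithmetic concrete (just a sketch; here the error is treated as an absolute amplitude against full scale at 1.0, and the -60 dBFS figure is only an example of a tighter tolerance you might pick instead):

```python
import math

def error_to_dbfs(error_amplitude: float, full_scale: float = 1.0) -> float:
    """Express an absolute sample error as a level in dBFS (ref full scale = 1.0)."""
    return 20.0 * math.log10(error_amplitude / full_scale)

print(error_to_dbfs(0.1))    # -20.0 dBFS: large for an audio signal, as noted above
print(error_to_dbfs(0.001))  # -60.0 dBFS: one possible tighter test tolerance
```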
(Has anyone made a tool to diff sixels yet? /s)
jwosty|4 years ago