In my opinion, if you break down all the polygons in your scene into non-overlapping polygons, clip them against pixel boundaries, calculate the color of each polygon fragment (applying all paints, blend modes, etc.), and sum the results weighted by coverage, then in the end that's the best visual quality you can get. That's the idea I'm working on: the decomposition/clip step runs on the CPU, while the paint/blend summation is done by the GPU.
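A minimal sketch of the clip-to-pixel part of that idea (hypothetical helper names, pure Python, convex polygons only): clip each polygon against a pixel's square with Sutherland-Hodgman and use the exact clipped area as the coverage weight, instead of point sampling.

```python
# Clip a convex polygon to one pixel and compute exact area coverage.
# Hypothetical sketch of the "decompose, clip, sum" approach above.

def clip_polygon_to_pixel(poly, px, py):
    """Sutherland-Hodgman clip of a convex polygon against the
    pixel square [px, px+1] x [py, py+1]."""
    def clip_edge(pts, inside, intersect):
        out = []
        for i, cur in enumerate(pts):
            prev = pts[i - 1]  # wraps to last vertex when i == 0
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def x_cut(a, b, x):  # intersection with a vertical boundary
        t = (x - a[0]) / (b[0] - a[0])
        return (x, a[1] + t * (b[1] - a[1]))

    def y_cut(a, b, y):  # intersection with a horizontal boundary
        t = (y - a[1]) / (b[1] - a[1])
        return (a[0] + t * (b[0] - a[0]), y)

    pts = list(poly)
    for inside, cut in (
        (lambda p: p[0] >= px,     lambda a, b: x_cut(a, b, px)),
        (lambda p: p[0] <= px + 1, lambda a, b: x_cut(a, b, px + 1)),
        (lambda p: p[1] >= py,     lambda a, b: y_cut(a, b, py)),
        (lambda p: p[1] <= py + 1, lambda a, b: y_cut(a, b, py + 1)),
    ):
        if not pts:
            break
        pts = clip_edge(pts, inside, cut)
    return pts

def area(poly):
    """Shoelace formula; assumes counter-clockwise winding."""
    s = 0.0
    for i, (x0, y0) in enumerate(poly):
        x1, y1 = poly[(i + 1) % len(poly)]
        s += x0 * y1 - x1 * y0
    return 0.5 * s

# A triangle covering exactly half of pixel (0, 0):
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
coverage = area(clip_polygon_to_pixel(tri, 0, 0))
print(coverage)  # exact analytic coverage, not a sampled estimate
```

The coverage fraction would then weight the fragment's computed color in the per-pixel sum; since the input polygons are non-overlapping, the weights for a fully covered pixel add up to 1.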
dahart|1 year ago
mfabbri77|1 year ago
phkahler|1 year ago
All of this "filtering" is a variation on adding blur. In fact, the article extends the technique to deliberately blur images at a larger scale. When we integrate a function (which could be a color gradient over a fully filled polygon) and then paint the little square with a solid "average" color, that too is a form of blurring (more like distorting, in this case) the detail.
It is notable that the examples given are moving, which means moire patterns and other artifacts will have frame-to-frame effects that may be visually annoying. Simply blurring the image takes care of that, at the expense of eliminating what looks like detail but may not actually be meaningful. Some of the less blurry images seem to have radial lines that bend and re-emerge in another location, for example, so I'd call that false detail. It may actually be better to blur such detail than to leave it looking sharper with false contours.
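A toy 1D illustration of the frame-to-frame point (names are my own, not from the article): point-sample a stripe pattern finer than a pixel while it drifts sub-pixel amounts per frame, and the sampled value flickers between extremes; box-averaging over each pixel (i.e. a blur) stays comparatively stable.

```python
# Temporal aliasing vs. box filtering on a sub-pixel stripe pattern.
# Hypothetical 1D sketch; pixel width is 1.0, stripe period is 0.7.

def stripes(x, period=0.7):
    """1.0 on a stripe, 0.0 off it; period < 1 pixel causes aliasing."""
    return 1.0 if (x / period) % 1.0 < 0.5 else 0.0

def point_sample(offset, n_pixels=8):
    """One sample at each pixel center, pattern shifted by `offset`."""
    return [stripes(i + 0.5 + offset) for i in range(n_pixels)]

def box_average(offset, n_pixels=8, taps=64):
    """Average many samples inside each pixel: a box filter, i.e. a blur."""
    return [
        sum(stripes(i + (k + 0.5) / taps + offset) for k in range(taps)) / taps
        for i in range(n_pixels)
    ]

# Drift the pattern by 0.1 pixel per "frame" and watch pixel 0:
for frame in range(4):
    off = frame * 0.1
    print(point_sample(off)[0], round(box_average(off)[0], 2))
```

The point-sampled value jumps between 0 and 1 across frames (the flicker the comment describes), while the box-averaged value hovers near the pattern's 50% duty cycle; the "detail" the sharp version shows is phase-dependent, not real.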