Trilinear requires an image pyramid. Without downsampling to create that pyramid, you can't even do trilinear sampling, so your argument strikes me as odd and circular. It's like telling the developers of APIs such as ID3D11DeviceContext.GenerateMips to simply use GenerateMips instead of implementing it. Also, I never took this article to be about 3D rendering and using mip maps for trilinear interpolation; it's more about 2D image scaling. Have you never downscaled and upscaled images in a non-3D-rendering context?
Const-me|1 year ago
Indeed, and I found that leveraging hardware texture samplers is the best approach even for command-line tools which don't render anything.
A simple CPU implementation in C++ is just too slow for large images.
Apart from a few easy cases like the 2x2 downsampling discussed in the article, SIMD-optimized CPU implementations are very complicated for non-integer or non-uniform scaling factors, and they often require dynamic dispatch to run on older computers without AVX2. And despite the SIMD, they're still a couple of orders of magnitude slower than GPU hardware, while delivering only a barely observable quality win.
mschuetz|1 year ago
It has worse quality than something like a Lanczos filter, and it requires computing image pyramids first, i.e., it is also slower for the very common use case of rescaling images just once. And that article isn't really about projected/distorted textures, where trilinear filtering actually makes sense.