top | item 43385783


numba888 | 11 months ago

2000-2003, both are prehistoric. We have neural networks now to do things like upscaling and colorization.



jandrese | 11 months ago

Last time I was doing image processing in C, I was quantizing the colorspace using the technique from a paper published in 1982. Just because a source is old doesn't mean it's wrong.
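(The 1982 paper isn't named, but the best-known colorspace-quantization technique from that year is Heckbert's median cut, so here's a minimal C sketch of that idea under that assumption: recursively split the pixel set along its widest channel at the population median, then average each box into one palette entry.)

```c
/* Minimal median-cut color quantization sketch (assumes Heckbert 1982 is
   the paper meant). For simplicity: `colors` should be a power of two and
   not exceed the number of pixels; px is reordered in place. */
#include <stdlib.h>

typedef struct { unsigned char r, g, b; } Pixel;

static int cmp_channel; /* channel to sort on: 0 = r, 1 = g, 2 = b */

static unsigned char chan(const Pixel *p, int k) {
    return k == 0 ? p->r : (k == 1 ? p->g : p->b);
}

static int cmp(const void *a, const void *b) {
    return (int)chan((const Pixel *)a, cmp_channel)
         - (int)chan((const Pixel *)b, cmp_channel);
}

/* channel with the largest value range over the current box */
static int widest_channel(const Pixel *px, int n) {
    unsigned char lo[3] = {255, 255, 255}, hi[3] = {0, 0, 0};
    for (int i = 0; i < n; i++)
        for (int k = 0; k < 3; k++) {
            unsigned char c = chan(&px[i], k);
            if (c < lo[k]) lo[k] = c;
            if (c > hi[k]) hi[k] = c;
        }
    int best = 0;
    for (int k = 1; k < 3; k++)
        if (hi[k] - lo[k] > hi[best] - lo[best]) best = k;
    return best;
}

/* Append `colors` averaged palette entries for the n pixels in px;
   *out is the next free palette slot and is advanced as entries are written. */
void median_cut(Pixel *px, int n, int colors, Pixel *palette, int *out) {
    if (colors == 1 || n == 1) {           /* box done: average it */
        long r = 0, g = 0, b = 0;
        for (int i = 0; i < n; i++) { r += px[i].r; g += px[i].g; b += px[i].b; }
        palette[*out].r = (unsigned char)(r / n);
        palette[*out].g = (unsigned char)(g / n);
        palette[*out].b = (unsigned char)(b / n);
        (*out)++;
        return;
    }
    cmp_channel = widest_channel(px, n);   /* split along the widest channel */
    qsort(px, n, sizeof(Pixel), cmp);
    int mid = n / 2;                        /* population median */
    median_cut(px, mid, colors / 2, palette, out);
    median_cut(px + mid, n - mid, colors - colors / 2, palette, out);
}
```

E.g. quantizing two dark and two bright pixels to 2 colors yields one averaged dark entry and one averaged bright entry.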

numba888 | 11 months ago

It doesn't mean it's wrong. Partial derivatives, for example, are centuries old. But we are moving on. Just recently I saw a book on image recognition; although it was recent, it focused entirely on the math, with not a word about neural networks. It looked so outdated. Someone spent a whole career on that approach. Another example would be the linguistic approach to machine translation. A 'stupid' LLM does it far better today. I can only guess how professional linguists feel about it.

vincenthwt | 11 months ago

Yes, those methods are old, but they’re explainable and much easier to debug or improve than black-box neural networks. They’re still useful in many cases.

earthnail | 11 months ago

Only partially. The chapters on edge detection, for example, have only historical value at this point. A tiny NN can learn edges much better (that was basically AlexNet's claim to fame).

rahen | 11 months ago

I see it the same way I see 'Applied Cryptography'. It’s old C code, but it helps you understand how things work under the hood far better than a modern black box ever could. And in the end, you become better at cryptography than you would by only reading modern, abstracted code.