Show HN: Chebyshev approximation calculator
278 points | stuffmatic | 1 year ago | stuffmatic.com | reply
Here's a web app I made that generates code for efficiently approximating mathematical functions. This is useful when performance matters more than perfect accuracy, for example in embedded systems.
The app uses Chebyshev expansions, which despite their theoretical depth result in surprisingly compact and readable code in practice. The code is generated for you, and using it does not require any knowledge of the underlying theory.
Source code and more info: https://github.com/stuffmatic/chebyshev-calculator
[+] [-] herodotus|1 year ago|reply
[+] [-] aghilmort|1 year ago|reply
[+] [-] jaymzcampbell|1 year ago|reply
Here's a rather wonderful original document from the BBC Research Department (I had no idea that was a thing) back in 1969 going over just what makes them so great (https://downloads.bbc.co.uk/rd/pubs/reports/1969-10.pdf).
If all you've come across are Taylor approximations, these things can seem a little like magic at first.
[+] [-] stuffmatic|1 year ago|reply
[+] [-] orlp|1 year ago|reply
The software itself is a bit of a pain to use.
[+] [-] pclmulqdq|1 year ago|reply
[+] [-] Fredkin|1 year ago|reply
To work around it, I handled the x-near-zero case by just forcing the value to 1.0:
const sinc = (x) => Math.abs(x) > 1e-8 ? Math.sin(x) / x : 1.0;
[+] [-] Majromax|1 year ago|reply
That wouldn't exactly be a bug. The code is undoubtedly calculating the Chebyshev coefficients by evaluating the function on a grid of something like x_k = xmin + ((xmax - xmin)/2) * (1 + cos(pi*k/(n-1))) for k = 0..n-1. If one of those grid points happens to be exactly 0, it will try to evaluate Math.sin(0)/0, giving the NaN.
Another workaround is to use a slightly asymmetric range, such as [-3, +3.0000001].
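A minimal sketch of that failure mode, assuming Chebyshev-Lobatto node placement as guessed above (the app's actual sampling may differ):

```javascript
// Assumption: Chebyshev-Lobatto nodes, as guessed above. With a
// symmetric range and an odd node count, the middle node lands at
// exactly 0 in floating point, where a naive sin(x)/x gives NaN.
const chebNodes = (xmin, xmax, n) =>
  Array.from({ length: n }, (_, k) =>
    xmin + ((xmax - xmin) / 2) * (1 + Math.cos((Math.PI * k) / (n - 1))));

const naiveSinc = (x) => Math.sin(x) / x;  // 0/0 = NaN at x = 0
const safeSinc = (x) => (Math.abs(x) > 1e-8 ? Math.sin(x) / x : 1.0);

const nodes = chebNodes(-3, 3, 9);
// nodes[4] is the midpoint node: naiveSinc(nodes[4]) is NaN,
// while safeSinc(nodes[4]) returns 1.
```

Shifting the range to [-3, +3.0000001], as suggested, moves every node off zero so the naive form never hits 0/0.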
[+] [-] beyondCritics|1 year ago|reply
[+] [-] stuffmatic|1 year ago|reply
[+] [-] richrichie|1 year ago|reply
One's first go-to method should be Chebyshev. Neural nets should be a last resort.
[+] [-] xioxox|1 year ago|reply
[+] [-] stuffmatic|1 year ago|reply
[+] [-] hwc|1 year ago|reply
[+] [-] Rayhem|1 year ago|reply
[1]: https://www.chebfun.org
[+] [-] kxyvr|1 year ago|reply
I believe they use a different algorithm now, but the basic methodology originally used by Chebfun can be found in the book Spectral Methods in MATLAB by Trefethen (see chapter 6). The newer methodology based on ultraspherical functions is described in the SIAM Review paper "A Fast and Well-Conditioned Spectral Method" by Olver and Townsend.
[+] [-] atum47|1 year ago|reply
[+] [-] mandibles|1 year ago|reply
If you have a few spare CPU cycles, a hybrid approximation could start with a sparse lookup table of values as the initial guess for a few rounds of a numerical approximation technique. Or you just store the first few coefficients of a polynomial approximation (as in the OP's work).
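As a hedged sketch of that hybrid idea (the target function, table size, and round count here are all invented for illustration, not taken from the OP's tool), approximating 1/sqrt(x) on [1, 4): a small lookup table supplies the initial guess and a couple of Newton rounds refine it.

```javascript
// Hypothetical hybrid: lookup table for the initial guess, then a few
// rounds of Newton's method. 1/sqrt(x) on [1, 4) is just an example.
const TABLE_SIZE = 16;
const table = Array.from({ length: TABLE_SIZE }, (_, i) =>
  1 / Math.sqrt(1 + (3 * (i + 0.5)) / TABLE_SIZE));  // bucket midpoints

function rsqrt(x, newtonRounds = 2) {
  const i = Math.min(TABLE_SIZE - 1, Math.floor(((x - 1) / 3) * TABLE_SIZE));
  let y = table[i];  // coarse initial guess from the table
  for (let r = 0; r < newtonRounds; r++) {
    // Newton step for f(y) = 1/y^2 - x, converging to y = 1/sqrt(x)
    y = y * (1.5 - 0.5 * x * y * y);
  }
  return y;
}
```

Each Newton round roughly squares the relative error, so two rounds on a 16-entry table already give around five correct digits in the worst case here.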
[+] [-] CamperBob2|1 year ago|reply
Neural nets can be useful when you have samples of a function but no idea how to approximate it, but that's not the case here.
[+] [-] EdgeExplorer|1 year ago|reply
Training a neural net to calculate sines is like the math equivalent of using an LLM to reverse a string. Sure, you *can*, but the idea only makes sense if you don't understand how fundamentally solvable the problem is with a more direct approach.
It's always worth checking whether mathematicians already have a solution to a problem before reaching for AI/ML techniques. Unfortunately, a lot of effort is probably being spent these days on AI/ML solutions to problems that have a known, efficient, maybe even provably optimal solution that developers just don't know about.
[+] [-] magicalhippo|1 year ago|reply
The main strength of a neural network comes into play when there's a lot of different inputs, not just a handful. For the simpler cases like sin(x) we have other tools like the one posted here.
[1]: https://www.youtube.com/watch?v=FBpPjjhJGhk ("But what is a neural network REALLY?")
[+] [-] kevin_thibedeau|1 year ago|reply
[+] [-] linvs|1 year ago|reply
Math.cos(x * Math.exp(Math.cos(x * x))) is the best I've got so far, as it is highly composite, which leads to rapid oscillations and steep gradients that can't easily be approximated by a Chebyshev expansion.
[+] [-] ArmedSandwich|1 year ago|reply
[+] [-] hggigg|1 year ago|reply
It doesn't handle division by zero very well though, e.g. f(x) = 1/x. That case should probably be treated as undefined rather than reported as a bad expression.
[+] [-] tgv|1 year ago|reply
[+] [-] Zeetah|1 year ago|reply
I'd like to generate a Chebyshev approximation for a set of X, Y sensor values. Any hints on how to modify your code to do that?
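Not speaking for the project, but one standard route for sampled (x, y) data is a least-squares fit in the Chebyshev basis rather than interpolation at Chebyshev nodes. A rough, self-contained sketch (all function names here are made up):

```javascript
// Evaluate T_0..T_degree at t in [-1, 1] via the standard recurrence
// T_0 = 1, T_1 = t, T_k = 2 t T_{k-1} - T_{k-2}.
function chebBasis(t, degree) {
  const T = [1, t];
  for (let k = 2; k <= degree; k++) T.push(2 * t * T[k - 1] - T[k - 2]);
  return T.slice(0, degree + 1);
}

// Least-squares fit: solve the normal equations A^T A c = A^T y,
// where A[i][k] = T_k(t_i) and t_i is x_i mapped to [-1, 1].
function fitChebyshev(xs, ys, degree) {
  const xmin = Math.min(...xs), xmax = Math.max(...xs);
  const n = degree + 1;
  const ata = Array.from({ length: n }, () => new Array(n).fill(0));
  const aty = new Array(n).fill(0);
  for (let i = 0; i < xs.length; i++) {
    const t = (2 * (xs[i] - xmin)) / (xmax - xmin) - 1;
    const row = chebBasis(t, degree);
    for (let j = 0; j < n; j++) {
      aty[j] += row[j] * ys[i];
      for (let k = 0; k < n; k++) ata[j][k] += row[j] * row[k];
    }
  }
  return { coeffs: solve(ata, aty), xmin, xmax };
}

// Gaussian elimination with partial pivoting; fine for small systems.
function solve(A, b) {
  const n = b.length;
  const M = A.map((row, i) => [...row, b[i]]);
  for (let col = 0; col < n; col++) {
    let p = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[p][col])) p = r;
    [M[col], M[p]] = [M[p], M[col]];
    for (let r = col + 1; r < n; r++) {
      const f = M[r][col] / M[col][col];
      for (let c = col; c <= n; c++) M[r][c] -= f * M[col][c];
    }
  }
  const x = new Array(n).fill(0);
  for (let r = n - 1; r >= 0; r--) {
    let s = M[r][n];
    for (let c = r + 1; c < n; c++) s -= M[r][c] * x[c];
    x[r] = s / M[r][r];
  }
  return x;
}

function evalCheb({ coeffs, xmin, xmax }, x) {
  const t = (2 * (x - xmin)) / (xmax - xmin) - 1;
  const T = chebBasis(t, coeffs.length - 1);
  return coeffs.reduce((acc, c, k) => acc + c * T[k], 0);
}
```

For noisy sensor data, a degree well below the number of samples acts as a smoother; the normal equations are fine at low degree, though a QR-based solve is more robust if the degree gets large.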
[+] [-] stuffmatic|1 year ago|reply
[+] [-] anonzzzies|1 year ago|reply
[+] [-] roger_|1 year ago|reply
Any chance you can add a rational function version?
[+] [-] mgaunard|1 year ago|reply
[+] [-] gjm11|1 year ago|reply
For many purposes it's much better to represent a polynomial as a combination of Chebyshev polynomials: 1, x, 2x^2-1, 4x^3-3x, etc.
(Supposing you are primarily interested in values of x between -1 and +1. For other finite intervals, use Chebyshev polynomials but rescale x. If x can get unboundedly large, consider whether polynomials are really the best representation for the functions you're approximating.)
Handwavy account of why: Those powers of x are uncomfortably similar to one another; if you look at, say, x^4 and x^6, they are both rather close to 0 for smallish x and shoot up towards 1 once x gets close to +-1. So if you have a function whose behaviour is substantially unlike these and represent it as a polynomial, you're going to be relying on having those powers largely "cancel one another out", which means e.g. that when you evaluate your function you'll often be representing a smallish number as a combination of much larger numbers, which means you lose a lot of precision.
For instance, the function cos(10x) has 7 extrema between x=-1 and x=+1, so you should expect it to be reasonably well approximated by a polynomial of degree not too much bigger than 8. In fact you get a kinda-tolerable approximation with degree 12, and the coefficients of the best-fitting polynomial when represented as a combination of Chebyshev polynomials are all between -1 and +1. So far, so good.
If we represent the same function as a combination of powers, the odd-numbered coefficients are zero (as are those when we use the Chebyshev basis; in both cases this is because our function is an even function -- i.e., f(-x) = f(x)), but the even-numbered ones are now approximately 0.975, -4.733, 370.605, -1085.399, 1494.822, -994.178, 259.653. So we're representing this function that takes values between -1 and +1 as a sum of terms that take values in the thousands!
(Note: this isn't actually exactly the best-fitting function; I took a cheaty shortcut to produce something similar to, but not quite equal to, the minimax fit. Also, I make a lot of mistakes and maybe there are some above. But the overall shape of the thing is definitely as I have described.)
Since our coefficients will be stored only to some finite precision, this means that when we compute the result we will be losing several digits of accuracy.
(In this particular case that's fairly meaningless, because when I said "kinda-tolerable" I meant it; the worst-case errors are on the order of 0.03, so losing a few places of accuracy in the calculation won't make much difference. But if we use higher-degree polynomials for better accuracy and work in single-precision floating point -- as e.g. we might do if we were doing our calculations on a GPU for speed -- then the difference may really bite us.)
It also means that if we want a lower-degree approximation we'll have to compute it from scratch, whereas if we take a high-degree Chebyshev-polynomial approximation and just truncate it by throwing out the highest-order terms it usually produces a result very similar to doing the lower-degree calculation from scratch.
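To make the contrast concrete, here's a small self-contained sketch: build the degree-23 Chebyshev interpolant of cos(10x) from samples at Chebyshev points of the first kind, and evaluate it with the Clenshaw recurrence. All the Chebyshev-basis coefficients stay below 1 in magnitude, unlike the monomial coefficients quoted above.

```javascript
// Chebyshev coefficients of the interpolant of f on [-1, 1], sampled
// at first-kind Chebyshev points x_j = cos(pi (j + 1/2) / n).
function chebCoeffs(f, n) {
  const c = new Array(n).fill(0);
  for (let k = 0; k < n; k++) {
    for (let j = 0; j < n; j++) {
      const theta = (Math.PI * (j + 0.5)) / n;
      c[k] += (2 / n) * f(Math.cos(theta)) * Math.cos(k * theta);
    }
  }
  c[0] /= 2;  // the k = 0 term carries a factor 1/n, not 2/n
  return c;
}

// Clenshaw recurrence: stable evaluation of sum_k c_k T_k(x).
function clenshaw(c, x) {
  let b1 = 0, b2 = 0;
  for (let k = c.length - 1; k >= 1; k--) {
    const t = 2 * x * b1 - b2 + c[k];
    b2 = b1;
    b1 = t;
  }
  return x * b1 - b2 + c[0];
}

const c = chebCoeffs((x) => Math.cos(10 * x), 24);
// max |c_k| stays well below 1, and clenshaw(c, x) matches
// Math.cos(10 * x) closely across [-1, 1].
```

Dropping the highest-order entries of c is exactly the cheap truncation mentioned in the last paragraph: no refitting needed.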
[+] [-] sfpotter|1 year ago|reply
[+] [-] lainga|1 year ago|reply