aseg | 3 years ago
Here are two hand-wavy arguments that may not be 100% correct:
* Structured Bias: A symbolic regression term lets you -- the scientist -- control exactly what sort of expressions you expect to see. If I'm looking at data coming from a spring, I expect to see a lot of damped sinusoids and very little quantum physics. SR gives you control over the "programming language", while parametric regression only lets you change the number of parameters (not useful in this context).
* Generality: Parametric regression guarantees the best-fit parametric equation as long as you have a comprehensive sample of your data range. A symbolic expression (most of the time) extrapolates beyond the provided data range. In fact, this is one of the constraints in the main proof in the paper (f* should generalize)! Basically: if I only have data for sin(x) from 0 to π, PR will find the best fit on that interval, but there is no guarantee that the best fit will also work in the range π to 2π.
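The structured-bias point can be sketched with an ordinary nonlinear fit: if we hard-code the functional form we expect from a spring (a damped sinusoid), three parameters recover the signal, much like a symbolic-regression grammar restricts the space of candidate expressions. All names and parameter values below are made up for illustration.

```python
# Minimal sketch of "structured bias": the model family is fixed to a
# damped sinusoid, so the fitter only has to find its three parameters.
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, amp, damping, omega):
    # amp: amplitude, damping: decay rate, omega: angular frequency
    return amp * np.exp(-damping * t) * np.sin(omega * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 6.0, 200)
# Synthetic "spring" data: true parameters (2.0, 0.3, 4.0) plus noise.
y = damped_sine(t, 2.0, 0.3, 4.0) + 0.05 * rng.normal(size=t.size)

# With a structurally correct model, the fit recovers the true parameters.
popt, _ = curve_fit(damped_sine, t, y, p0=(1.0, 0.2, 3.8))
print(np.round(popt, 2))  # close to (2.0, 0.3, 4.0)
```

If the assumed structure is wrong (say the spring is driven, not free), this same rigidity becomes the "incorrect structured bias" con mentioned below.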
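The sin(x) example above is easy to check numerically: a degree-5 polynomial (a stand-in for parametric regression) fit only on [0, π] matches sin(x) well there but degrades badly on [π, 2π], whereas the symbolic form sin(x) extrapolates by construction. The degree and grid sizes below are arbitrary choices for illustration.

```python
# Fit a polynomial to sin(x) on [0, pi] and measure error inside and
# outside the training range.
import numpy as np

x_train = np.linspace(0, np.pi, 100)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=5)

x_in = np.linspace(0, np.pi, 100)            # interpolation range
x_out = np.linspace(np.pi, 2 * np.pi, 100)   # extrapolation range

err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))
err_out = np.max(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))
print(err_in, err_out)  # extrapolation error is orders of magnitude larger
```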
I want to stress that these aren't established facts, and each of these pros actually introduces a lot of cons in the process (what if you introduce an incorrect structured bias? what if the "general/simple" solution is actually a little imprecise, like Newton's laws vs. Einstein's theory?). This just means that there is plenty of exciting work to be done!
danuker | 3 years ago
I suspect you mean "the value of parameters" here.
rocqua | 3 years ago
Most neural networks have other hyperparameters too, but the ones in SR are probably quite interpretable and intuitive.