top | item 40228022

trwm | 1 year ago

Biases are just weights on an always-on input.

There isn't much difference between weights of a linear sum and coefficients of a spline.
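The bias claim is easy to check numerically: a minimal sketch (variable names are my own, not from the thread) showing that an affine layer's bias is identical to a weight on a constant-one input appended to the feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input features
W = rng.normal(size=(2, 3))     # weights
b = rng.normal(size=2)          # biases

# Standard affine layer: W @ x + b
y_affine = W @ x + b

# Same layer with the bias folded into the weight matrix as an extra
# column, acting on an always-on (constant 1) input.
W_aug = np.hstack([W, b[:, None]])   # shape (2, 4)
x_aug = np.append(x, 1.0)            # constant-one input appended

y_folded = W_aug @ x_aug

assert np.allclose(y_affine, y_folded)
```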


Lichtso | 1 year ago

> Biases are just weights on an always-on input.

Granted, but this approach does not require that constant-one input either.

> There isn't much difference between weights of a linear sum and coefficients of a spline.

Yes, the trained function coefficients of this approach are equivalent to the trained weights of an MLP. Still, this approach does not require the globally uniform activation function of an MLP.
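The contrast being drawn can be sketched in a few lines (a hedged illustration, not either commenter's code; knot positions and values are made up, not trained). An MLP edge carries a scalar weight and relies on one activation shared by every node, while a spline-per-edge approach puts a learnable 1D function on each edge (here a piecewise-linear interpolant via `np.interp`) and the node simply sums, with no shared activation.

```python
import numpy as np

knots = np.linspace(-2.0, 2.0, 5)   # fixed knot grid for the edge functions

def mlp_edge(x, w):
    # MLP edge: just a scalar weight; the nonlinearity comes later
    return w * x

def spline_edge(x, values):
    # Spline-style edge: one learnable output per knot,
    # piecewise-linear interpolation in between
    return np.interp(x, knots, values)

x = 0.5

# MLP node: sum weighted inputs, then apply one globally uniform activation
w1, w2 = 0.7, -1.3
mlp_out = np.tanh(mlp_edge(x, w1) + mlp_edge(x, w2))

# Spline node: sum the per-edge function outputs; no shared activation needed
v1 = np.array([0.0, -0.5, 0.0, 1.2, 2.0])   # edge 1's values at the knots
v2 = np.array([1.0, 0.3, -0.2, 0.0, 0.5])   # edge 2's values at the knots
spline_out = spline_edge(x, v1) + spline_edge(x, v2)
```

The parameter counts line up (one weight per edge vs. a handful of knot values per edge), which is why the downstream question is about efficiency per parameter rather than expressiveness.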

trwm | 1 year ago

At this point this is a distinction without a difference.

The only question is whether splines are more efficient than lines at describing general functions at the billion- to trillion-parameter scale.