top | item 40247231

droidlyx | 1 year ago

Hi Noesis, I just noticed that your implementation, combined with the efficient-kan implementation by Blealtan (https://github.com/Blealtan/efficient-kan), results in a structure very similar to SIREN (an MLP with sine activations). efficient-kan first computes the basis functions shared by all the edge activations, and the output is then a linear combination of that basis. If the basis functions are Fourier, a KAN layer can be viewed as a linear layer with fixed weights + a sin activation + a linear layer with learnable weights, which is a special form of SIREN. I think this may reveal a connection between KAN and MLP.
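A minimal sketch of the equivalence described above, in NumPy: a Fourier-basis KAN layer written once the "KAN way" (explicit sin/cos basis per input coordinate, followed by a learnable combination) and once as fixed linear → sin → learnable linear. The layer sizes, integer frequencies, and the trick cos(kx) = sin(kx + π/2) are my own illustrative choices, not anything from efficient-kan's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, K = 3, 2, 4  # hypothetical sizes: input dim, output dim, frequencies per input

# Fixed first linear layer: each row picks one input coordinate x_i and scales it by an
# integer frequency k; the bias adds a pi/2 phase so that cos(k x_i) = sin(k x_i + pi/2).
freqs = np.arange(1, K + 1)
W_fixed = np.zeros((2 * K * d_in, d_in))
b_fixed = np.zeros(2 * K * d_in)
row = 0
for i in range(d_in):
    for k in freqs:
        W_fixed[row, i] = k            # produces sin(k * x_i)
        row += 1
        W_fixed[row, i] = k            # produces cos(k * x_i) via the pi/2 phase
        b_fixed[row] = np.pi / 2
        row += 1

# Learnable second linear layer (random init stands in for trained coefficients).
W_learn = rng.normal(size=(d_out, 2 * K * d_in))

def fourier_kan_as_siren(x):
    """The layer written SIREN-style: fixed linear -> sin -> learnable linear."""
    return W_learn @ np.sin(W_fixed @ x + b_fixed)

def fourier_kan_direct(x):
    """The same layer written the 'KAN way': explicit sin/cos basis, then combine."""
    basis = []
    for i in range(d_in):
        for k in freqs:
            basis.append(np.sin(k * x[i]))
            basis.append(np.cos(k * x[i]))
    return W_learn @ np.array(basis)

x = rng.normal(size=d_in)
assert np.allclose(fourier_kan_as_siren(x), fourier_kan_direct(x))
```

The only difference from a standard SIREN layer is that the first linear map is frozen; only the combination weights are trained.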

bionhoward | 1 year ago

How could this help us understand the difference between the learned parameters and their gradients? Can the gradients become one with the parameters, à la the exponential function?