rytill | 7 months ago

Why would one be motivated not to use activation functions?

To my knowledge they’re a negligible portion of the total compute during training or inference and work well to provide non-linearity.

Very open to learning more.
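
For reference, a minimal NumPy sketch of the "provide non-linearity" point (shapes and values below are arbitrary): without an activation, stacked linear layers collapse into a single linear map, so some non-linearity is needed for depth to add expressive power.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two "layers" with no activation function between them.
    W1 = rng.normal(size=(8, 4))
    W2 = rng.normal(size=(3, 8))
    x = rng.normal(size=(4,))

    # Applying the layers one after another...
    deep = W2 @ (W1 @ x)

    # ...is exactly the same as one linear layer with weights W2 @ W1,
    # so stacking linear layers alone adds no expressive power.
    shallow = (W2 @ W1) @ x

    print(np.allclose(deep, shallow))  # True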

russfink | 7 months ago

One reason might be expressing the constructs in a different domain, e.g. homomorphically encrypted evaluators.
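
To make that concrete: typical homomorphic encryption schemes only evaluate additions and multiplications, so a non-polynomial activation like ReLU has to be dropped or replaced by a polynomial. A minimal plaintext sketch (the square surrogate below is illustrative, in the spirit of CryptoNets-style square activations, not taken from any particular FHE library):

    import numpy as np

    def relu(x):
        # Not a polynomial, so it cannot be evaluated directly under
        # schemes that only support additions and multiplications.
        return np.maximum(0.0, x)

    def square_act(x):
        # Polynomial stand-in (a single multiplication), usable on
        # encrypted values; illustrative, not from a specific library.
        return x * x

    x = np.linspace(-2.0, 2.0, 5)
    print(relu(x))        # [0. 0. 0. 1. 2.]
    print(square_act(x))  # [4. 1. 0. 1. 4.]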

mlnomadpy | 6 months ago

They are one of the reasons neural networks are black boxes: we lose information about the data manifold the deeper we go in the network, making it impossible to trace the output back to the input.

This preprint is not coming from the standpoint of optimizing inference or compute, but from trying to create models that we can interpret and control in the future.

julius | 7 months ago

Less information loss -> fewer params? Please correct me if I got this wrong. The Intro claims:

"The dot product itself is a geometrically impoverished measure, primarily capturing alignment while conflating magnitude with direction and often obscuring more complex structural and spatial relationships [10, 11, 4, 61, 17]. Furthermore, the way current activation functions achieve non-linearity can exacerbate this issue. For instance, ReLU (f (x) = max(0, x)) maps all negative pre-activations, which can signify a spectrum of relationships from weak dissimilarity to strong anti-alignment, to a single zero output. This thresholding, while promoting sparsity, means the network treats diverse inputs as uniformly orthogonal or linearly independent for onward signal propagation. Such a coarse-graining of geometric relationships leads to a tangible loss of information regarding the degree and nature of anti-alignment or other neg- ative linear dependencies. This information loss, coupled with the inherent limitations of the dot product, highlights a fundamental challenge."

mlnomadpy | 6 months ago

Yes, since you can learn to represent the same problem with fewer parameters. However, most architectures are optimized for the linear dot product, so we have to figure out a new architecture for it.