top | item 43561687

trextrex | 11 months ago

Isn't that basically a recurrent neural network?

eigenvalue | 11 months ago

I can see where you’re coming from, but not really. Unlike an RNN, the main transformer still processes sequences non-recurrently. The “sidecar” model just encodes internal activations into compressed latent states, allowing introspection and rollback without changing the underlying transformer architecture.
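To make the distinction concrete, here is a minimal numpy sketch of the idea as I read it. Everything here is hypothetical (the layer, the encoder weights, the shapes): a toy "transformer" layer processes all sequence positions in parallel with no hidden state carried between them, while a separate sidecar encoder compresses that layer's activations into a small latent, leaving the main model's forward pass untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_latent, seq_len = 16, 4, 8

# Toy "transformer" layer: applies the same weights to every position
# in parallel. No recurrence: position t never sees a hidden state
# carried over from position t-1, unlike an RNN.
W = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

def transformer_layer(x):
    # x: (seq_len, d_model) -> (seq_len, d_model), all rows independent
    return np.tanh(x @ W)

# Hypothetical sidecar encoder/decoder: compresses the layer's
# activations into a small latent state for introspection, and can
# reconstruct an approximation of them (the "rollback" direction).
# The main model above is not modified in any way.
W_enc = rng.standard_normal((d_model, d_latent)) / np.sqrt(d_model)
W_dec = rng.standard_normal((d_latent, d_model)) / np.sqrt(d_latent)

x = rng.standard_normal((seq_len, d_model))
h = transformer_layer(x)   # main model's activations, unchanged
z = h @ W_enc              # compressed latent snapshot of those activations
h_approx = z @ W_dec       # approximate reconstruction from the latent
```

The non-recurrent point is checkable: because each row of `x` is processed independently, running the layer on a single position gives the same output as running it on the full sequence, which would not hold for an RNN whose state threads through the sequence.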