top | item 43286329


jacobn | 1 year ago

The animation on the page looks an awful lot like autoregressive inference in that virtually all of the tokens are predicted in order? But I guess it doesn't have to do that in the general case?

creata | 1 year ago

The example in the linked demo[0] seems less left-to-right.

Anyway, I think we'd expect it to usually be more-or-less left-to-right -- we usually decide what to write or speak left-to-right, too, and we don't seem to suffer much for it.

(Unrelated: it's funny that the example generated code has a variable "my array" with a space in it.)

[0]: https://ml-gsai.github.io/LLaDA-demo/
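The point about decoding order can be made concrete: in a masked-diffusion sampler, the order in which positions get unmasked is a choice of the sampling rule, not something baked into the model. A common heuristic is to reveal the position the model is most confident about at each step. Here's a toy sketch (the confidence scores are made up, not from any real model) showing that this rule need not produce left-to-right order:

```python
def diffusion_unmask_order(seq_len, confidence):
    """Simulate the unmasking schedule of a masked-diffusion decoder:
    at each step, reveal the still-masked position with the highest
    (hypothetical) model confidence. Nothing forces left-to-right order."""
    masked = set(range(seq_len))
    order = []
    while masked:
        # Pick the most confident position among those still masked.
        pos = max(masked, key=lambda i: confidence[i])
        order.append(pos)
        masked.remove(pos)
    return order

# Made-up per-position confidences: the model is surest about
# positions 3 and 0, so decoding jumps around the sequence.
conf = [0.9, 0.2, 0.4, 0.95, 0.1]
print(diffusion_unmask_order(5, conf))  # → [3, 0, 2, 1, 4]
```

If the model's confidences happen to decay from left to right (which they often do in ordinary text, since earlier tokens are less constrained by unknowns), this same rule degenerates into roughly autoregressive order, which may be what the animation is showing.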

whoami_nr | 1 year ago

Yeah, but you can backtrack your thinking. You also have an inner voice to plan out the next couple of words, reflect, and self-correct before uttering them.

whoami_nr | 1 year ago

So, in practice there are some limitations here. Chat interfaces force you to feed the entire context to the model every time you ping it, and multi-step tool calls work similarly. So, yeah, we may effectively turn all of this into autoregressive models anyway.