top | item 41218010


AISnakeOil | 1 year ago

So basically the model took user data and used it as training data in realtime? Big if true.


Jensson | 1 year ago

No, the whole point of the transformer architecture is that it can do things like this without any extra training; an LLM can copy your writing pattern and so on purely from context.

throwaway48540 | 1 year ago

It did the same thing ChatGPT does when it picks up your writing style and exact words/sentences after a few messages. Literally: the audio is encoded as tokens and fed to the LLM, so there is no distinction between text and audio from the model's point of view.
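
The "no distinction between text and audio" idea can be sketched in a few lines. This is a toy illustration only: the tokenizer and audio codec here are stand-ins (a real system would use something like BPE for text and a neural codec for audio), and the vocabulary sizes are made up. The point is just that both modalities end up as integers in one flat sequence.

```python
# Toy sketch: text and audio share one token space, so the model sees a
# single integer sequence with no modality boundary. All names and sizes
# here are hypothetical, for illustration only.

TEXT_VOCAB_SIZE = 50_000      # assumed text vocabulary size
AUDIO_CODEBOOK_SIZE = 1_024   # assumed number of discrete audio codes

def encode_text(text):
    # Stand-in for a real tokenizer: one token per character.
    return [ord(c) % TEXT_VOCAB_SIZE for c in text]

def encode_audio(samples, frame_size=4):
    # Stand-in for a neural audio codec: quantize each frame of samples
    # to a discrete code, then offset codes past the text vocabulary so
    # the two modalities' IDs don't collide.
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    codes = [hash(tuple(f)) % AUDIO_CODEBOOK_SIZE for f in frames]
    return [TEXT_VOCAB_SIZE + c for c in codes]

# One flat list of ints reaches the model; the speaker's voice/style in the
# audio tokens conditions next-token prediction exactly like text would.
prompt = encode_text("User said: ") + encode_audio(
    [0.1, -0.2, 0.3, 0.0, 0.5, 0.1, -0.4, 0.2]
)
```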

simonw | 1 year ago

This was inference, not training. Like how you can paste a few paragraphs of text into ChatGPT and ask it to write another paragraph in a similar writing style.
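
The inference-vs-training distinction can be made concrete with a toy "model" that has no learned weights at all, yet still adapts to its prompt, because the prompt itself is the input. This is a deliberately minimal sketch, not how a transformer actually works: it just counts bigrams in the current context window, and nothing persists between calls.

```python
# Toy illustration of "inference, not training": a frozen model whose
# predictions adapt to the prompt without any parameter update.
from collections import Counter

def predict_next(context_tokens):
    # No stored state, no training step: the function only looks at the
    # tokens it is handed, so all "adaptation" comes from the context.
    last = context_tokens[-1]
    followers = Counter(
        b for a, b in zip(context_tokens, context_tokens[1:]) if a == last
    )
    return followers.most_common(1)[0][0] if followers else None

prompt = "the cat sat on the mat and the cat".split()
print(predict_next(prompt))  # imitates the prompt's own pattern -> 'sat'
```

The weights (here: none at all) never change; paste a different prompt and the same frozen function produces output in that prompt's style.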