top | item 32359951


tmjdev | 3 years ago

This is something I have noticed with a lot of models. I'm not sure what the technical term for it is, but when there is a repeated sequence of human input with model generation following (like a chatbot), it seems to be unable to focus. When you prod it to regain focus and come back to the topic being discussed, it starts making things up.

If you use GPT-3 for a large amount of content generation the focus issue doesn't seem to be so prevalent, but there is zero guarantee of truth.


throwaway675309|3 years ago

It's unable to focus because you can only feed back so much of the ongoing transcript before it exceeds the prompt's input size limit.

So having a conversation with even some of the best GPT models is going to be like having a conversation with the protagonist from Memento.
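The truncation being described can be sketched roughly like this: a chat frontend only feeds back the most recent turns that fit the model's prompt budget, so older turns silently fall out of view. This is a hypothetical illustration, not any particular frontend's code; `build_prompt` is an invented helper, and word count stands in for a real tokenizer.

```python
def build_prompt(history, max_tokens=50):
    """Keep only the most recent (role, text) turns that fit the token budget.

    Walks the transcript backwards from the newest turn; everything that
    no longer fits is dropped, which is why the model "forgets" early context.
    """
    kept = []
    used = 0
    for role, text in reversed(history):
        cost = len(text.split())  # crude stand-in for real token counting
        if used + cost > max_tokens:
            break  # this turn and everything older is forgotten
        kept.append((role, text))
        used += cost
    return list(reversed(kept))

history = [
    ("user", "My name is Alice and I live in Oslo."),
    ("bot", "Nice to meet you, Alice!"),
    ("user", "word " * 45),  # one long rambling turn eats most of the budget
    ("user", "What is my name?"),
]
prompt = build_prompt(history, max_tokens=50)
# The turn that mentioned the name no longer fits in the prompt, so the
# model literally cannot see it anymore -- the "Memento" effect.
```

With a 50-token budget, only the last two turns survive; the introduction where the user gave their name is gone from the model's view entirely.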

simonelnahas|3 years ago

Interestingly, the source code for the frontend has Chinese comments all over it.

croes|3 years ago

Did Blake Lemoine try this with LaMDA?

ASalazarMX|3 years ago

It's difficult not to run into this issue; he even admitted the chat logs were not verbatim.