top | item 35593628

TisButMe | 2 years ago

(Author here) that's what I thought originally, but then it means that LLMs never get to learn from new content - current ones stop in 2021, so they don't know that Russia invaded Ukraine, or that Arc is a cool browser, or the API of any library released after their cutoff date (which has been an issue for me for code generation with fast-moving libraries). I don't think it's good enough to stop acquiring new content.

mdale|2 years ago

There is nothing to prevent a robust hierarchy of rules and training that impacts levels of permissions per operator intent.

OpenAI has made a lot of progress on this in a very short amount of time. Casual jailbreaking or negative role playing is already 100x more difficult than in early versions of the ChatGPT chat interface.

We will see more sophisticated robust adversarial filters to untrusted content going forward.

TisButMe|2 years ago

Possibly, yes - that's my point about peak oil being predicted wrong for 50 years. Still, right now it seems that every time OpenAI or someone else adds a new content filter, someone figures out a prompt escape that works.

tough|2 years ago

Phind's GPT-4-enabled search fixes the new-content bias
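
The search-enabled approach mentioned here is usually called retrieval-augmented generation: retrieve fresh documents at query time and prepend them to the prompt, so the model isn't limited to its training cutoff. A minimal sketch, with an assumed toy corpus and naive word-overlap scoring (real systems like Phind use web search and embedding-based ranking, not this):

```python
# Minimal RAG sketch: retrieve fresh context, then build a prompt from it.
# The corpus, scoring, and prompt template are illustrative assumptions,
# not any particular product's pipeline.

def tokenize(text):
    """Lowercase bag-of-words; a stand-in for a real retriever's index."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query, return top k."""
    q = tokenize(query)
    scored = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend retrieved context so post-cutoff facts reach the model."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "somelib 2.0 (released 2023) renamed fetch() to fetch_async()",
    "somelib 1.x used a blocking fetch() call",
    "unrelated note about peak oil predictions",
]
print(build_prompt("how do I call fetch in somelib 2.0", corpus))
```

The retrieved snippets carry post-cutoff API changes into the context window, which is exactly why search-backed assistants handle fast-moving libraries better than a frozen model.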