I just mean, given that LLMs exist this isn't a surprising result. It only looks surprising because the UI makes you forget that each prompt is a completely new universe to the model.
I wouldn't call a step in a history-aware conversation a completely new universe. By that logic, every token generation would be a new universe, even though each token depends heavily on the prompt: custom instructions, chat history, and all tokens generated in the response so far.
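(For readers outside the weeds: the "new universe" framing can be sketched in a few lines. The model function below is a stand-in, not a real LLM API; the point is only that a stateless function "remembers" a conversation solely because the caller re-sends the whole history on every turn, which is roughly what chat UIs do.)

```python
def fake_model(prompt: str) -> str:
    # A stateless stand-in "model": its output depends only on the
    # prompt it is handed on this call. Nothing persists between calls.
    return f"[reply covering {prompt.count('User:')} user turn(s)]"

def chat_turn(history: list[str], user_msg: str) -> list[str]:
    history = history + [f"User: {user_msg}"]
    # The entire conversation is flattened and re-sent every turn.
    reply = fake_model("\n".join(history))
    return history + [f"Assistant: {reply}"]

history: list[str] = []
history = chat_turn(history, "hello")
history = chat_turn(history, "what did I just say?")
# The second reply "knows" about the first turn only because the
# second prompt physically contained it.
```

So both framings are right in a sense: each call is a fresh context window, but the caller reconstructs continuity by stuffing the history back in.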
fsmv|2 years ago
block_dagger|2 years ago