top | item 44491118

true_blue | 7 months ago

I tried the playground and got a strange response. I asked for a regex pattern, and the model gave itself a little game plan, then wrote the pattern and started writing tests for it. But it never stopped writing tests. It kept writing tests of increasing size until, I guess, it reached a context limit and the answer was canceled. Also, for each test it wrote, it added a comment about whether the test should pass or fail, but after about the 30th test it started getting those wrong too, saying a test should fail when it should actually pass if the pattern is correct. And after about the 120th test, the tests stopped making sense at all. They were just nonsense characters until the answer got cut off.

The pattern it made was also wrong, but I think the first issue is more interesting.

ianbicking | 7 months ago

FWIW, I remember regular models doing this not that long ago, sometimes getting stuck in something like an infinite loop where they keep producing output that is only a slight variation on previous output.
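As a toy illustration of that looping behaviour (not how any of these models actually work), a greedy bigram generator shows why a tiny effective context leads to cycles: when the model only "sees" the last token, any repeated token sends generation into the same loop forever. All names here are made up for the sketch.

```python
from collections import defaultdict

# Toy greedy bigram "language model": the context window is just one token.
# Trained on a short corpus, greedy decoding quickly falls into a cycle,
# loosely analogous to LLM output degenerating into near-repetition.
corpus = "the model wrote a test then the model wrote another test".split()

# Count successors for each token.
succ = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    succ[a][b] += 1

def generate(start, n):
    out = [start]
    for _ in range(n):
        nxt = succ.get(out[-1])
        if not nxt:
            break
        # Greedy: always pick the most frequent successor.
        out.append(max(nxt, key=nxt.get))
    return out

# Prints a sequence that repeats "the model wrote a test then" indefinitely.
print(" ".join(generate("the", 12)))
```

Real LLMs carry far more context than one token, but the failure mode is the same shape: once the window no longer distinguishes the current state from an earlier one, the continuation repeats.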

data-ottawa | 7 months ago

If you shrink the context window on most models you'll get this type of behaviour. If you go too small, you end up with basically gibberish even on modern models like Gemini 2.5.

Mercury has a 32k context window according to the paper, which could be why it does that.

beders | 7 months ago

I think that's a prime example showing that token prediction simply isn't good enough for correctness. It never will be. LLMs are not designed to reason about code.

_kidlike | 7 months ago

I had this happen to me on Claude Sonnet once. It started spitting out huge blocks of source code completely unrelated to my prompt, seemingly from its training data, and switching codebases once in a while: a few thousand lines of some C program, then another JavaScript one, etc. It was insane!

throwaway314155 | 7 months ago

This is common amongst _all_ of the smaller LLMs.

fiatjaf | 7 months ago

This is too funny to be true.