I tried the playground and got a strange response. I asked for a regex pattern; the model gave itself a little game plan, then wrote the pattern and started writing tests for it. But it never stopped writing tests. It kept producing tests of increasing size until, I assume, it hit a context limit and the answer was cancelled.

For each test it wrote, it added a comment about whether the test should pass or fail, but after about the 30th test those comments started being wrong too, saying a test should fail when it should actually pass if the pattern were correct. And after about the 120th test, the tests stopped making sense at all; they were just nonsense characters until the answer got cut off.

The pattern it produced was also wrong, but I think the first issue is the more interesting one.
data-ottawa|7 months ago
Mercury has a 32k context window according to the paper, which could be why it does that.