It feels a bit like this to me. That's not to say LLMs shouldn't have detected this, but I still feel like it fits the "vibes" the question gives off, and some LLMs fall into that trap. Is that actually what's happening in the neural nets? Maybe not! But I always find it interesting, or at least entertaining, to approach these questions that way nonetheless, especially given the pattern-matching nature of LLMs.