When I simply asked the question, the model failed, as did most of the others. It's a smaller model that I could run locally, so it is understandably less powerful.
I wanted to see if a prompt would do better if it pulled into the analysis 1) a suggestion not to take every question at face value, and 2) knowledge of the structure of riddles.
These are part of the "context" humans bring, so I speculated that this might be missing from the LLM's reasoning unless explicitly included.