top | item 38321273


overactor | 2 years ago

I think it's a little harsh to claim that the LLM didn't understand the words. Sure, it's far from perfect, but it mostly gives coherent answers. The AI is instructed to interpret each scenario as deadly, so it will typically do that, even when it doesn't make much sense.



user_7832 | 2 years ago

I’m not being harsh towards the creator of the game; this is more a critique of LLMs in general. Asking an LLM for the steps to do something impossible will force it to give a physically absurd result. That’s certainly not an issue if you’re using one as a tool, but in a game like this it gives “faulty” results.