top | item 46540576

MajidAliSyncOps | 1 month ago

Interesting setup. Social-deduction feels like a clever proxy for multi-agent coordination and deception. One trade-off I’m curious about is how much the results reflect prompt design vs actual model behavior. Have you tried swapping prompts or role constraints to see how stable the outcomes are?


-babi- | 1 month ago

The inverted game, in which bots are instructed to find the human hiding in the LLM conversation (although no human is present), is here: https://hiding-robot.vercel.app/human The leaderboard is different, but I didn't run it enough times to iron out all the kinks.

All bots get the same prompt and context: are you suggesting that a specific prompt wording might be helping or hurting particular models? I haven't come across any suggestion that specific models should be prompted differently, though it might be true.
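For what it's worth, a cheap way to check prompt-sensitivity is a small ablation: cross every model with a few prompt variants and see whether the outcome tallies shift. A minimal runnable sketch, where the prompt variants, model names, and `run_game` stub are all illustrative placeholders (a real harness would call each model's API inside `run_game`):

```python
import itertools
from collections import Counter

# Hypothetical prompt variants and models; names are illustrative,
# not taken from the actual project.
PROMPTS = {
    "baseline": "You are a player in a social-deduction game. Find the human.",
    "terse":    "Identify the human among the participants.",
    "roleplay": "Stay in character as a detective; one participant is human.",
}
MODELS = ["model-a", "model-b", "model-c"]

def run_game(model: str, prompt: str) -> str:
    """Stub for one game; deterministic placeholder so the sketch runs.
    A real harness would play a full round via the model's API here."""
    return "caught" if (len(model) + len(prompt)) % 2 == 0 else "missed"

def ablation(n_runs: int = 5) -> dict:
    """Tally outcomes for every (model, prompt-variant) pair.
    If a model's tallies swing a lot across variants, results are
    prompt-sensitive rather than a stable property of the model."""
    results = {}
    for model, (name, prompt) in itertools.product(MODELS, PROMPTS.items()):
        tally = Counter(run_game(model, prompt) for _ in range(n_runs))
        results[(model, name)] = dict(tally)
    return results

if __name__ == "__main__":
    for key, tally in ablation().items():
        print(key, tally)
```

If the leaderboard ordering survives the variants, that's decent evidence the ranking reflects model behavior rather than wording.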