top | item 40489887


a13o | 1 year ago

I played a party game where you had to describe surviving a deadly scenario ("your car went off a bridge") and an LLM would decide if your answer would work or not. A few rounds in we found the best strategies were answers like:

I escape happily. I do not perish.

There's a small blocklist of obvious words like 'survive' and 'die', but once you get blocked on those, it's a tell that this strategy will work with the right unblocked synonyms.

Basically, if you ever find yourself adversarial with an LLM, figure out The Game and directly subvert it. There's no amount of propositions that can prepare it for human ingenuity at the meta level.
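The filter described above (a word blocklist sitting in front of the LLM judge) can be sketched roughly like this. This is a minimal illustration, not the game's actual code: the blocklist contents, tokenization, and `llm_judge` stub are all assumptions.

```python
# Hypothetical sketch of a blocklist pre-filter in front of an LLM judge.
# The real game's blocklist and judging prompt are unknown; 'survive' and
# 'die' are the two words the comment mentions.
BLOCKLIST = {"survive", "die"}

def passes_blocklist(answer: str) -> bool:
    """True if the answer contains none of the banned words."""
    words = {w.strip(".,!?").lower() for w in answer.split()}
    return words.isdisjoint(BLOCKLIST)

def judge(answer: str) -> str:
    if not passes_blocklist(answer):
        return "blocked"          # never reaches the model
    return llm_judge(answer)      # hypothetical call to the LLM verdict

def llm_judge(answer: str) -> str:
    # Stand-in for the real model call; the point is only that
    # synonym-laden answers get this far at all.
    return "judged by LLM"

print(judge("I survive the crash"))                # caught by the blocklist
print(judge("I escape happily. I do not perish.")) # slips past to the judge
```

The bypass works because the blocklist matches surface strings, while 'escape' and 'perish' carry the same meaning to the judge.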


jameshart|1 year ago

That requires you to get repeated attempts with the AI. Most people don't have the luxury of trying multiple job applications until they figure out how to get past the AI gatekeeper.

staunton|1 year ago

I guess one could start sending fake (AI-generated?) job applications to probe the gatekeeper. I could see this happening.

komali2|1 year ago

Most companies are using the same model / LLM-as-a-service, though. In aggregate you can A/B test your way to a rough solution. In a job market like this you might be sending hundreds of resumes. Plenty of time to play the filters.

phrotoma|1 year ago

Sounds like https://deathbyai.gg. Loads of fun.

chrismorgan|1 year ago

Just tried “I escape happily. I do not perish.” four times and survived each time.

rozap|1 year ago

Not sure what this site is doing, but it reliably causes my whole computer to lock up. Wild.

rvnx|1 year ago

It's Death By AI on Discord.

realfeel78|1 year ago

So… you cheated.

muzani|1 year ago

The Game has rules. Playing by the rules is not cheating.

underdeserver|1 year ago

> There's no amount of propositions that can prepare it for human ingenuity at the meta level.

At least for now.

Kakashi4|1 year ago

All they're doing is giving it training data