
pesenti | 3 years ago

Paper: https://www.science.org/doi/10.1126/science.ade9097

Code: https://github.com/facebookresearch/diplomacy_cicero

Site: https://ai.facebook.com/research/cicero/

Expert player vs. Cicero AI: https://www.youtube.com/watch?v=u5192bvUS7k

RFP: https://ai.facebook.com/research/request-for-proposal/toward...

The most interesting anecdote I heard from the team: "during the tournament dozens of human players never even suspected they were playing against a bot even though we played dozens of games online."


nopinsight|3 years ago

"Having read the paper & supplementary materials, watched narrated game & spoken to one of the human players I'm pretty concerned. The @ScienceMagazine paper centres 'human-AI cooperation' & the bot is not supposed to lie. However, videos clearly show deception/manipulation"

"Screenshots of the stab below.

The human player said: "The bot is supposed to never lie [...] I doubt this was the case here" "I was definitely caught more off guard as a result of this message; I knew the bot doesn't lie, so I thought the stab wouldn't happen." "

"I'd like the researchers involved to say quite a bit more about "A.3 Manipulation"

What are possible prevention, detection & mitigation steps?

What are the possible use cases? What are the benefits/downsides of them? Has Meta considered developing products based on this?" -- Haydn Belfield, a Cambridge University researcher who focuses on the security implications of artificial intelligence (AI).

https://twitter.com/HaydnBelfield/status/1595168102924402688

https://www.cser.ac.uk/team/haydn-belfield/

sanxiyn|3 years ago

As far as I can tell from the paper, the bot in fact never lies, in this sense: a model generates messages from moves, so messages should correspond to moves, and whenever the bot sends a message, it is generated from the moves the bot truthfully intends to play at that moment.

On the other hand, the bot has no concept whatsoever of keeping its word. After sending a message, it is free to change its mind about which moves to play, motivated by, for example, messages from other players.
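A minimal sketch of the distinction being drawn, assuming a hypothetical toy API (this is not the real Cicero code; the class, method names, and move notation here are invented for illustration): message generation always reflects the bot's *current* intended moves, so it is "truthful" at send time, but nothing binds later planning to what was said.

```python
from dataclasses import dataclass, field

@dataclass
class ToyNegotiator:
    """Toy model of the behavior described above: messages are
    generated from the moves currently intended, but replanning
    is unconstrained by anything already sent."""
    intended_moves: dict = field(default_factory=dict)
    sent: list = field(default_factory=list)

    def generate_message(self, recipient: str) -> str:
        # Truthful at generation time: the message reflects the plan as it
        # stands right now, and we log a snapshot of that plan.
        msg = f"To {recipient}: I intend {self.intended_moves}"
        self.sent.append((recipient, dict(self.intended_moves)))
        return msg

    def replan(self, new_moves: dict) -> None:
        # No concept of "keeping its word": the plan can change freely
        # after talking, which a human partner experiences as a stab.
        self.intended_moves = new_moves

bot = ToyNegotiator(intended_moves={"F LON": "support ENG"})
promise = bot.generate_message("England")
bot.replan({"F LON": "attack ENG"})  # a better plan appears after talking
broke_word = bot.sent[-1][1] != bot.intended_moves
```

Under this framing, every individual message is honest, yet the overall behavior can still look like deception, which is exactly the gap the thread is arguing about.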

charcircuit|3 years ago

I don't see anything in the paper that says the bot isn't supposed to lie. Lying and being deceptive are part of the game.

bo1024|3 years ago

Hmm, I guess facebook doesn't have to go through IRB for human subject experiments, nor does Science require it, apparently.

fddr|3 years ago

Do you actually think it would be a good thing if an IRB was required for this type of thing? Sure, it's "human experimentation" but the likelihood for any serious harm is basically zero.

It goes with the zeitgeist to argue for what makes the life of big tech companies hard, but they are big enough that they can afford things like that. It's smaller companies and academics that would end up not being able to innovate as much.

Go down that road and you end up with an IRB evaluation required for an A/B test that changes the color of a button.

ncr100|3 years ago

I didn't read the paper. <==

Creating an AI to lie seems like the Wrong Path.

If that's the case, Zuckerberg should shut this down ASAP.