
thethirdone | 9 months ago

I'm not sure what effect you think I want. The suggestion was just to increase the "interestingness" of the study. It seems like the main difference shown between LLMs and humans was response length. Controlling for that variable and rerunning the experiment would help surface other differences.
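For concreteness, here is a minimal sketch of what "controlling for that variable" could look like. Everything in it is made up for illustration (the column names, the data-generating process, the numbers); it is not the study's actual method, just a logistic regression with word count as a covariate:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 400

    # Hypothetical setup: LLM responses run much longer than human ones,
    # and (by construction) only length drives persuasion here.
    is_llm = rng.integers(0, 2, n)
    words = rng.normal(60 + 120 * is_llm, 30, n).clip(min=10)
    p_convinced = 1 / (1 + np.exp(-(words - 100) / 40))
    convinced = rng.binomial(1, p_convinced)

    df = pd.DataFrame({"convinced": convinced,
                       "is_llm": is_llm,
                       "word_count": words})

    # With word_count as a covariate, the is_llm coefficient estimates
    # the LLM-vs-human gap *after* accounting for length. On this
    # synthetic data it should come out near zero.
    fit = smf.logit("convinced ~ is_llm + word_count", data=df).fit()
    print(fit.summary())

If the is_llm term stayed significant after adding word_count on the real data, that would point to a difference beyond sheer length.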

I do think it's distinctly possible that LLMs will be much less convincing at a low word count due to increased hallucinations. I also think that may matter less for dishonest suggestions: simply stating a lie confidently is relatively effective.

I would prefer advising humans to increase length rather than restricting LLMs because of the cited effects.

aspenmayer | 9 months ago

> I would prefer advising humans to increase length rather than restricting LLMs because of the cited effects.

I would advise humans to do the opposite, since your advice plays to the strengths of AI/LLMs and away from the strengths humans have relative to them.

thethirdone | 9 months ago

Advising humans to do the opposite does not make sense; 13 words is already a tiny budget for convincing someone. The options I had in mind were restricting LLM word count or increasing human word count. The goal is specifically to make the two more comparable.

The given study does not show any strength of humans over LLMs: both goal metrics (truthful and deceptive) were better for LLMs than for humans. If you are reading my advice as general advice for people outside the study's conditions, I would want to see the results of the proposed rerun before endorsing that.

However, if length of text is legitimately convincing regardless of content, I don't know why humans should avoid using that. If LLMs end up more convincing to humans than other humans are, simply because people are too prideful to make their arguments longer, that seems like the worst possible future.