item 42337166

(no title)

jl2718 | 1 year ago

This is the bigger reality. It’s turned almost all business and academic writing into long-winded, meaningless trash. Well, more than it already was, I guess. The way people seem to use it is to expand a few bits of information into many bits of content, to convince others that work was done. It’s like a Turing test for laziness.

The other issue is that it tends toward agreement on anything it wasn’t specifically trained to disagree about. I can see a smarter and more disagreeable bot doing much worse on LMSys than the sycophant models. Nothing new there, I guess. But it’s spilling over into human norms as well: now that any deviation from chat-model-style interaction reads as anomalous, everybody has to use the AI, and therefore nobody is providing any more value than the LLM, so everybody is getting laid off, except the disagreeable guy, and he gets fired first.

It’s hacking our positive-reinforcement vulnerabilities, ones that get worse the more they’re exploited, but it has none of the human resource constraints that previously kept them in check.

No comments yet.