top | item 41667929


pech0rin | 1 year ago

As an aside, it's really interesting how the human brain can so easily read an AI essay and realize it's AI. You would think that with the vast corpus these models were trained on, there would be a more human-sounding voice.

Maybe it's overfitting, or maybe it's just the way the models work under the hood, but any time I see AI-written stuff on Twitter, Reddit, or LinkedIn it's so obvious it's almost disgusting.

I guess it's just the brain being good at pattern matching, but it's crazy how fast we have adapted to recognize this.


Jordan-117 | 1 year ago

It's the RLHF training to make them squeaky clean and preternaturally helpful. Pretty sure without those filters and with the right fine-tuning you could have it reliably clone any writing style.

llm_trw | 1 year ago

One need only go to the dirtier corners of the LLM forums to find some _very_ interesting voices there.

To quote someone from a Tor BB board: my chat history is illegal in 142 countries and carries the death penalty in 9.

bamboozled | 1 year ago

But without the RLHF aren’t they less useful “products”?

infinitifall | 1 year ago

Classic survivorship bias. You simply don't recognise the good ones.

Al-Khwarizmi | 1 year ago

Everyone I know claims to be able to recognize AI text, but every paper I've seen where that ability is A/B tested says that humans are pretty bad at this.

carlmr | 1 year ago

>Maybe it's overfitting or maybe just the way models work under the hood

It feels more like averaging, or finding the median, to me. The writing style is just very unobtrusive, like the average TOEFL/GRE/SAT essay style.

Maybe that's just what most of the material looks like.

chmod775 | 1 year ago

These models are not trained to act like a single human in a conversation, they're trained to be every participant and their average.

Every instance of a human choosing not to engage or speak about something, whether because they didn't want to or were just clueless about the topic, is not part of their training data. They're only trained on active participants.

Of course they'll never seem like a singular human with limited experiences and interests.

izacus | 1 year ago

The output of those AIs is akin to products and software designed for the "average" user: deep inside the uncanny valley, saying nothing specific, having no distinct style, conveying no emotion, and offering nothing to latch on to.

It's the perfect embodiment of HR/corpspeak, which I think is why it's so triggering for us (ex) corpo drones.

amelius | 1 year ago

Maybe it's because the human brain gets tired and cannot write at the same quality level all the time, whereas an AI can.

Or maybe it's because of the corpus of data that it was trained on.

Or perhaps because AI is still bad at any kind of humor.