LLMs write this way because people write this way. Maybe not everyone, but enough of them that the models learn to do it. Much of my writing reads like an LLM wrote it, but that doesn't make me an LLM.
Yes and no. LLMs take all the writing on the Internet, good and bad, and average it out. It's similar to the way generative-AI images always have an identifiable, artificial "look": the averaging smooths away the personality and erases the individuality the original artists put into their work.
No, that's only true of a base model. Benchmaxxing during the RL phase is how you get the advertisement-style "punchy" writing: even though people don't usually write that way, it's eye-catching, and raters will vote for the bullet-point, em-dash slop. I wonder whether some lab will be bold enough to do "anti-RLHF", lmarena score be damned.
timmytokyo|1 day ago
lelanthran|1 day ago
I doubt it; share something you wrote prior to, say... 2024.
krackers|1 day ago