item 47049473

barrkel | 12 days ago

This is a good statement of what I suspect many of us have found when rejecting the rewriting advice of AIs. The "pointiness" of prose gets worn away, until it doesn't say much. Everything is softened. The distinctiveness of the human voice is converted into blandness. The AI even says its preferred rephrasing is "polished" - a term which specifically means the jaggedness has been removed.

But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually get your ideas into their heads.


svara|12 days ago

I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better. As in, the prose is easier to understand, free of obvious errors or ambiguities.

But then, the writing is also never great. I've tried a couple of times to get it to write in the style of a famous author, sometimes pasting in some example text to model the output on, but it never sounds right.

datsci_est_2015|11 days ago

> I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better.

Even poor writers write with character. My dad misspells every 4th word when he texts me, but it’s unmistakably his voice. Endearingly so.

I would push back with passion that AI writes “legitimately” better, as it has no character except the smoothed mean of all internet voices. The millennial gray of prose.

littlestymaar|12 days ago

> A lot of people aren't, and the AI legitimately writes better.

It may write “objectively better”, but the very distinct feel of all AI generated prose makes it immediately recognizable as artificial and unbearable as a result.

aaplok|12 days ago

It depends how you define "good writing", which is too often associated with "proper language", and by extension with proper breeding. It is a class marker.

People have a distinct voice when they write, including (perhaps even especially) those without formal training in writing. That this voice is grating to the eyes of a well educated reader is a feature that says as much about the reader as it does about the writer.

Funnily enough, professional writers have long recognised this, as is shown by the never-ending list of authors who tried to capture certain linguistic styles in their work, particularly in American literature.

There are situations where you may want this class marker to be erased, because being associated with a certain social class can have a negative impact on your social prospects. But it remains that something is being lost in the process, and that something is the personality and identity of the writer.

Retric|12 days ago

I find most people can write far better than AI; they simply don't put in the effort.

Which is the real issue: we're flooding channels not designed for such low-effort submissions. AI slop is just spam in a different context.

lich_king|12 days ago

I am really conflicted about this because yes, I think that an LLM can be an OK writing aid in utilitarian settings. It's probably not going to teach you to write better, but if the goal is just to communicate an idea, an LLM can usually help the average person express it more clearly.

But the critical point is that you need to stay in control. A lot of people just delegate the entire process to an LLM: "here's a thought I had, write a blog post about it", "write a design doc for a system that does X", "write a book about how AI changed my life". Then they ship it, outsourcing the work of making sense of the output and catching errors to others.

It also results in the creation of content that, frankly, shouldn't exist because it has no reason to exist. The amount of online content that doesn't say anything at all has absolutely exploded in the past 2-3 years. Including a lot of LLM-generated think pieces about LLMs that grace the hallways of HN.

baxtr|12 days ago

I think it’s essential to realize that AI is a tool for mainstream tasks like composing a standard email and not for the edges.

The edges are where interesting stuff happens. The boring part can be made more efficient: I don't need to type boring emails, and people who can't articulate well will be elevated.

It’s the efficient popularization of the boring stuff. Not much else.

anon-3988|11 days ago

> The edges are where interesting stuff happens. The boring part can be made more efficient: I don't need to type boring emails, and people who can't articulate well will be elevated.

I think that boring emails should not be written at all. What kind of boring email NEEDS to be written but that you don't WANT to write? Those are exactly the emails that SHOULD NOT be passed through an LLM.

If you need to say yes or no, you don't want to feed the whole email conversation to an LLM and let it generate a story about why you said yes or no.

If you want to apply for leave, just keep it minimal: "Hi <X>, I want to take leave from Y to Z. Thanks." You don't want to generate two pages of justification for why you want to take this leave to see your family and friends.

In fact, for every LLM output, I want to see the input instead. What did they have in mind? If I have the input, I can ask an LLM to generate a million outputs if I really want to read an elaboration. The input is what matters.

If I have the input, I can always generate an output. If I have the output, I don't know what was the input (i.e. the original intention).

layer8|12 days ago

It contributes to making “standard” emails boring. I rather enjoy reading emails in each sender’s original voice. People who can’t articulate well aren’t elevated, instead they are perceived to be sending bland slop if they use LLMs to conceal that they can’t express themselves well.

folbec|12 days ago

I think it is also fairly similar to the kind of discourse a manager in pretty much any domain will produce.

He lacks (or has lost through disuse) technical expertise in the subject, so he uses increasingly fuzzy words, leaky analogies, and buzzwords.

This may be why AI-generated content has so much success among leaders and politicians.

coke12|12 days ago

Every group wants to label some outgroup as naively benefiting from AI. For programmers, apparently it's the pointy-haired bosses. For normies, it's the programmers.

Be careful of this kind of thinking: it's very satisfying, but it doesn't help you understand the world.

devmor|12 days ago

> But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually get your ideas into their heads.

This brings to mind what I think is a great description of the process LLMs exert on prose: sanding.

It's an algorithmic pull toward the median: the model sands down your words until they're a smooth average of their approximate neighbors.

DuperPower|12 days ago

No, it's just bad writing. It repeats information, it adds superfluous stuff, and it doesn't produce more specific ways of saying things. You're making it sound like it's "too perfect" when it's bland because it's artificial dumbness, not artificial intelligence.

johnnienaked|11 days ago

Well said. In music, it's very similar. The jarring, often out of key tones are the ones that are the most memorable, the signatures that give a musical piece its uniqueness and sometimes even its emotional points. I don't think it's possible for AI to ever figure this out, because there's something about being human that is necessary to experiencing or even describing it. You cannot "algorithmize" the unspoken.

piker|12 days ago

Bryan Cantrill referred to it as "normcore" on a podcast, and that's the perfect description.

amelius|12 days ago

I'm sure this can be corrected by AI companies.

yoyohello13|12 days ago

The question is… why? What is the actual human benefit (not monetary).

q3k|12 days ago

Just let my work have a soul, please.