top | item 45858919


seabass|3 months ago

Strongly disagree. If you read enough of it, the patterns in AI text become very familiar. Take this paragraph, for example:

> Here’s what surprised me: the practices that made my exit smooth weren’t “exit strategies.” They were professional habits I should have built years earlier—habits that made work better even when I was staying.

The "It's not x—it's y" construction, the dashes, the Q&A-style text from the parent comment, and the overall cadence were too hard to look past.

So as a counterpoint to the complaints being tedious, I'd say they're useful for preempting the realization that I'm wasting time reading AI output.


neilv|3 months ago

Regardless, people are going to start writing naturally like current LLM output, because that's a lot of what they are reading.

A tech doc writer once mentioned how she'd been reading Hunter S. Thompson, and that it was immediately bleeding into her technical writing.

So I tried reading some HST myself, and... some open source code documentation immediately got a little punchy.

> So for a counterpoint about the complaints being tedious, I’d say they are nice to preempt the realization that I’m wasting time reading ai output.

Good point. And if it's actually genuine original text from someone whose style was merely tainted by reading lots of "AI" slop, I guess that might be a reason to prefer reading someone who has a healthier intellectual diet.

sph|3 months ago

> A tech doc writer once mentioned how she'd been reading Hunter S. Thompson, and that it was immediately bleeding into her technical writing.

That is honestly incredible and actionable advice.

Can’t wait to sprinkle a taste of the eldritch in my comments after reading some Lovecraft.

radley|3 months ago

Curious - is your concern that the post is 100% AI generated? Or do you object that AI may have been used to clean up the post?

novok|3 months ago

AI writing often leads to word inflation, so getting the original, more concise version is helpful IMO. Hiding it is the annoying part; disclosing that you used AI to help and offering a "source code" version would go over much better. If a person is deceptive and dishonest about something so obvious, how can you trust anything else they say?

It also leads to slop spam content. Writing it yourself is a form of anti-spam. I think tools like Grammarly help strike a balance between "AI slop machine" and "help with my writing."

And because AI posts are so low effort, reading them feels like being handed links to a Google search: higher noise, lower signal.

onraglanroad|3 months ago

OK, well this post from the same author seems very similar in style. Why isn't it AI too? https://andreacanton.dev/posts/2020-02-19-git-mantras/

seabass|3 months ago

It has a bunch of human imperfections, and I love that. The lowercase lists and inconsistent casing for similarly structured content throughout, the grammar mistakes, and overall structure. This article has a totally different feel compared to the newest ones. When you say it’s very similar, what are you picking up on? They feel like night and day from my perspective.

dang|3 months ago

LLMs got all these patterns from humans in the first place*. They're common in LLM output because they're common in human output. Therefore this argument isn't very reliable.

If P is the probability that a text containing these patterns was generated by an LLM, then yes, P > 0, but readers who are (understandably) tired of generated comments are overestimating P.
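The base-rate point can be made concrete with Bayes' rule. The numbers below are purely illustrative assumptions (prior fraction of LLM comments, pattern frequencies in each population), not measurements:

```python
# Illustrative Bayes check on P(LLM | pattern). All numbers are
# made-up assumptions for the sake of the arithmetic.
p_llm = 0.10        # assumed prior: 10% of comments are LLM-generated
p_pat_llm = 0.60    # assumed P(pattern | LLM-generated text)
p_pat_human = 0.20  # assumed P(pattern | human-written text)

# Total probability of seeing the pattern at all.
p_pattern = p_pat_llm * p_llm + p_pat_human * (1 - p_llm)

# Bayes' rule: P(LLM | pattern).
p_llm_given_pattern = p_pat_llm * p_llm / p_pattern
print(round(p_llm_given_pattern, 2))  # 0.25
```

Even with the pattern being three times more frequent in LLM text, the posterior here is only 25%, because most comments containing the pattern still come from the much larger human population. A reader who jumps from "pattern present" to "certainly AI" is ignoring that base rate.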

* Edit: I see now that the GP comment already said this.