item 45835255


thadt | 3 months ago

Yesterday my wife burst into my office: "You used AI to generate that (podcast) episode summary, we don't sound like that!"

In point of fact, I had not.

After the security reporting issue, the next problem on the list is "trust in other people's writing".


bob1029|3 months ago

I think one potential downside of using LLMs or exposing yourself to their generated content is that you may subconsciously adopt their quirks over time. Even if you aren't actively using AI for a particular task, prior exposure to their outputs could be biasing your thoughts.

This has additional layers to it as well. For example, I actively avoid using em dashes or anything that resembles them right now. If I had no exposure to the drama around AI, I wouldn't even be thinking about this. I am constraining my writing simply to avoid the implication.

jerf|3 months ago

I didn't make heavy use of it, but I did sometimes use "It's not X, it's Y" or some closely related variant. I've had to strike that from my writing, because whether or not it makes anyone else cringe, it's making me cringe now. My usage doesn't even match the ones the LLMs favor, my X & Y were typically full clauses with many words rather than the LLM's use of short, punchy X & Ys... but still. Close enough. Can't write it anymore.

I'm still using bullet lists sometimes, as they have their place, and I'm hoping LLMs don't totally nuke them.

code51|3 months ago

Exactly, and this is hell for programming.

You don't know whose style the LLM will pick for a particular prompt and project. You might end up with Carmack, or maybe with that buggy, test-failing piece of junk project on GitHub.

imiric|3 months ago

Isn't the alternative far more likely? These tools were trained on the way people write in certain settings, which includes a lot of curated technical articles like this one, and we're seeing that echoed in their output.

There's no "LLM style". There's "human style mimicked by LLMs". If they default to a specific style, then that's on the human user who chooses to go with it, or, likely, doesn't care. They could just as well make it output text in the style of Shakespeare or a pirate, eschew emojis and bulleted lists, etc.

If you're finding yourself influenced by LLMs—don't be. Here's why:

• It doesn't matter.

• Keep whatever style you had before LLMs.

:tada:

riskable|3 months ago

I suddenly have the urge to reply to this with a bulleted list where the bullets are emoji.

jobigoud|3 months ago

This is already a big problem in art: people go on witch hunts over what they think are signs of AI use.

It's sad, because people who are OK with AI art still enjoy human art just the same. Somehow their visceral hatred of AI art has managed to ruin human art for them as well.

whywhywhywhy|3 months ago

This will ultimately only harm the human artists accused of it. AI artists can just say "yeah, I did, so what", defusing the criticism.

robby_w_g|3 months ago

If there wasn't global-scale theft of art and content or if LLMs could produce something better than an inferior facsimile, I bet there would be less backlash.

But instead we had a 'non-profit' called 'Open'AI that irresponsibly unleashed this technology on the world and lied about its capabilities, with no care for how it would affect the average person.

dingnuts|3 months ago

AI visual output mimics art well enough that it is now more difficult to identify authenticity and humanity, which are important for the human connection audiences want from art.

AI outputs mimicking art rob audiences of the ability to appreciate art on its own in the wild, without further markers of authenticity, which steals joy from a whole generation of digital artists who have grown up sharing their creativity with each other.

If you lack the empathy to understand why AI art-like outputs are abhorrent, I hope someone wastes a significant portion of your near future with generated meaningless material presented to you as something that is valuable and was time consuming to make, and you gain nothing from it, so that you can understand the problem for yourself first hand.

acedTrex|3 months ago

I blogged about this fundamental demolition of trust a few months ago.

HN discussed it here https://news.ycombinator.com/item?id=44384610

The responses were a surprisingly mixed bag. What I thought was a very common sense observation had some heavy detractors in those threads.

gdulli|3 months ago

You're on a forum full of people trying to profit from this tech. In that context the pushback is obvious.

riskable|3 months ago

Exposure to AI leads to people writing like AI. Just like when you're hanging out in certain circles, you start to talk like those people. It's human nature.