item 47057977

nippoo | 12 days ago

They've taken it down now and replaced with an arguably even less helpful diagram, but the original is archived: https://archive.is/twft6

yoz-y|12 days ago

Wow, it’s even worse than I thought. I thought the unconvincing morphing would be the only problem. But there are also the nonsensical and inconsistent arrowheads, the missing annotations, the missing bubbles. The “tirm” axis…

That this was ever published shows a supreme lack of care.

quietbritishjim|12 days ago

The “tirm” axis is great! Not only have they invented their own letter (it's not r, or n, or m, but one more than m!), it also points the wrong way.

shaky-carrousel|12 days ago

And that's what they dared to show to the public. I shudder thinking about the state of their code...

duxup|12 days ago

It really is wild, and telling, how fundamentally AI can screw up what seem like just the basics... like an arrow.

zephen|12 days ago

Is it truly possible to make GitFlow look worse than reality?

heresie-dabord|12 days ago

This passage from the post by the original creator of the diagramme summarises our Bruh New World:

"What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?"

rzmmm|12 days ago

It looks like typical "memorization" in image-generation models; the author likely just prompted for the image.

The model makers attempt to add guardrails to prevent this, but they're not perfect. It seems a lot of large AI models basically just copy the training data and add slight modifications.

pjc50|12 days ago

Remember, mass copyright infringement is prosecuted if you're Aaron Swartz but legal if you're an AI megacorp.

coldpie|12 days ago

> It seems a lot of large AI models basically just copy the training data and add slight modifications

Copyright laundering is the fundamental purpose of LLMs, yes. It's why all the big companies are pushing it so much: they can finally freely ignore copyright law by laundering it through an AI.

jimmaswell|12 days ago

> It seems a lot of large AI models basically just copy the training data and add slight modifications

This happens even to human artists who aren't trying to plagiarize. For example, guitarists often come up with a riff that turns out to be very close to one they heard years ago, even if it feels original to them in the moment.