item 47058166

marssaxman | 11 days ago

It seems to me rather less likely that someone at Microsoft knowingly and deliberately took his specific diagram and "ran it through an AI image generator" than that someone asked an AI image generator to produce a diagram with a similar concept, and it responded with a chunk of mostly-memorized data, which the operator believed to be a novel creation. How many such diagrams were likely to have been in the training set? Is overfitting really so unlikely?

The author of the Microsoft article most likely failed to credit or link back to the original diagram because they had no idea it existed.


jacquesm | 11 days ago

How you commit plagiarism is less important than the fact that you commit plagiarism.

marssaxman | 11 days ago

What difference does that make in solving the actual problem? The real story here is not "some lousy Microsoft employee ripped off this guy's graphic", but "people using AI image generators may receive near-copies of existing media instead of new content, with no indication that this has happened".

If this has been discovered once, it must be happening every day. What can we do about that? Perhaps image generators need to build in something like a TinEye search to validate the novelty of their output before returning it.
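A minimal sketch of what such a novelty check might look like, assuming a perceptual difference hash (dHash) as a stand-in for TinEye-style reverse image search. Everything here is illustrative, not any real generator's API: the function names, the distance threshold, and the toy 9x8 grayscale grids are all assumptions.

```python
def dhash(pixels):
    """Difference hash: pixels is 8 rows of 9 grayscale values.

    Each bit records whether a pixel is brighter than its right-hand
    neighbor, giving a 64-bit fingerprint that survives small edits.
    """
    bits = 0
    for row in pixels:                 # 8 rows
        for x in range(8):             # 8 adjacent-column comparisons per row
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def looks_derivative(candidate, known_hashes, threshold=10):
    """Flag output whose hash is within `threshold` bits of a known image."""
    h = dhash(candidate)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

A near-duplicate (say, a uniformly brightened copy) preserves the adjacent-pixel orderings and so lands at or near the same hash, while a genuinely different image sits many bits away. A real system would precompute hashes over the training corpus and flag low-distance matches before returning an image to the user.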

zahlman | 11 days ago

Yes, but from OP's perspective this is a distinction without a difference.

marssaxman | 11 days ago

Clearly, but OP would be well advised to apply Hanlon's razor. The victimhood narrative does not improve understanding, which is necessary to work toward better outcomes.