(no title)
just6979 | 10 days ago
I think this just shows how plagiarize-y LLMs are. There has been a lot recently about how easy it is to get a model to reproduce entire books to 98%, and this shows how the same can be done with images. Prompt it the right way, and you can get shitty copies of anything it was trained on. Really shows how little (none?) new content is actually being created, and how much is basically just lossy compression (with really noisy decompression) of the training corpus.
rasz | 8 days ago
I mean, they do have the original in the repo: https://github.com/MicrosoftDocs/learn/blob/c266367ec0eb1f7f...