I think you're missing the context that is the article.
The candy in this case is the people who may or may not go read your e.g. ramen recipe. The real problem, as I see it, is that over time, as LLMs absorb the information covered by that recipe, fewer people will actually look at the search results, since the AI summary already tells them how to make a good-enough bowl of ramen. The number of ramen enjoyers is zero-sum. Your recipe will, of course, stay up and accessible to real people, but LLMs take away impressions that could have been yours. To extend the metaphor, they take your candy and put it in their own bowl.
horsawlarway|6 months ago
Why do you take this as a problem?
And I'm not being glib here - those are genuine questions. If the goal is to share a good ramen recipe... are you not still achieving that?
SamBam|6 months ago
It's completely disingenuous to say that everyone who creates content -- blog authors, recipe creators, book writers, artists, etc. -- should just be happy feeding the global consciousness because everyone will then get a tiny, diluted iota of their unattributed wisdom.
jasonvorhe|6 months ago
The same goes for other content that can easily be propped up with lengthy text stuffed with just the right terms to spam search indexes.
LLMs are just readability on speed, with the downsides of drugs.