the_shivers | 2 years ago
Also, I don't think the problem is necessarily AI for some of these complaints. Twitter replies are awful because you can pay for increased visibility. On a platform like Reddit, where it's popularity that determines visibility, the issue virtually disappears. The same goes for SEO spam websites: it's a function of Google's algorithm, which incentivizes brainless keyword spam, rather than an AI issue. Both of these issues predate generative AI.
The braindead children's youtube videos, for what it's worth, also predate AI.
I feel like these are just growing pains from a revolutionary new technology. Certainly the printing press can enable the spread of a lot of low quality content and misinformation, but we managed to work out the kinks.
rusty_venture | 2 years ago
I think the author's point is that generative AI is a completely different animal than the printing press. The printing press could be used to spread both factual information and misinformation alike, and for various reasons factual information seems to have predominated, and high quality information is at least readily available, even if it's not the majority of printed work. Then there's the Internet, which can similarly be used to publish both information and misinformation. Perhaps due to the lower bar for entry and the speed of dissemination, the balance of information to misinformation and high-quality to low-quality content doesn't favor information or high-quality content as strongly as it does in the world of printed text, but at least the Internet always has the potential to spread factual information and high-quality content.
Then there's generative AI. Unlike the two communication technologies referenced above, AI can ONLY produce low-quality content. It is by design a statistical inference technique that generates content remixed from its training data. Without substantial human rework and rewriting, AI will always produce such low-quality dreck as "it's hard to learn volleyball without a ball". And it's increasingly promoted as a way to reduce human effort in writing, ensuring that people will continue to use it without supervising its output, especially if they are trying to mass-produce content to make money. So now we have a new situation in which the majority of content produced going forward is likely to be extremely low quality and perhaps contain substantial misinformation as well, whether intentional or unintentional. The author seems to posit that exposure to this type of content will negatively affect people's ability to learn to produce good, original content of their own: if they are not exposed to even passably good writing from a young age, they cannot learn to emulate it.
dartos | 2 years ago
We technically inclined people are ahead of the curve.
Once people get a "feel" for what AI content looks like, they'll be able to filter it out like they do all existing spam.
squigz | 2 years ago
Stopped reading after this. This is like saying Photoshop can only produce low-quality content.
jjjjj55555 | 2 years ago
Using the examples of both adults and toddlers to make his point is apt. At least the toddler has no choice.