robocat | 8 days ago
It is hard for individuals to fight back against toxic obesogenic[1] advertisers.
Even when we communally create a helpful concept or word, it eventually gets undermined and debased by sociopathic advertising (now with an extra helping hand from AI).
Aside 1: there's an art to narrowing down and selecting relevant information from AI-generated responses, plus the art of writing and rewriting good prompts. I fear many people will fail to learn to prompt intelligently (many people haven't even learned to use search operators, despite having decades to do so!).
Irrelevant Aside 2: AI-generated spelling mistakes are fun to look for in AI comments faking humanity. Humans make different mistakes from the ones AI generates - "Chipolte" feels especially human!
[1] prompt: "What are some strong words meaning the opposite of benign (especially as relating to antisocial food advertising)". Aside 3: the response ended with "isn’t just harm—it’s harm with intent or harm that multiplies. Accidental harm calls for correction. Engineered harm calls for reform." A moralising or pandering AI feels systemically dangerous.