
alaithea | 5 months ago

Respectfully, I think you're missing the point that this is a societal rather than an individual concern. What will the average person's response to AI be? Probably to not recognize it, let alone spurn it. The cumulative effects of your neighbors, particularly the young ones who will grow up amidst this, or the old and gullible, being led along by computers over years is the thing you need to be more concerned about.


kqr|5 months ago

Sure, and there are people who stuff themselves full of fast food, alcohol, and/or cigarettes. I get that those things are different in that it is possible to levy vice taxes on them, but the primary defense is and will be education.

What we can do as technologists is establish clear norms around information junk food for our children and close acquaintances, and influence others to do the same.

It's not going to happen overnight -- as with many such things, I expect it'll take decades of mistakes followed by decades of repairing them. What we've learned from other such mistakes is that saying "feel bad about the dumb thing" ("be worried") is less effective than "here's a smart thing you can do instead".

drdaeman|5 months ago

I’m not sure education or awareness is a solution. It doesn’t hurt, of course, but I think the real issue is that we’re frequently feeling “low energy” (for lack of a better term), so entry barriers become important and least-effort options start to win (“just picking up a phone/tablet” easily wins here most of the time), even if we’re well aware that they’re not as rewarding.

I blame all the background stress and I think it’s a more important factor.

criley2|5 months ago

When I look at the state of how humans have manipulated each other, how the media is noxious propaganda, how businesses have perfected emotional and psychological manipulation of us to sell us crap and control our opinions, I don't think AI's influence is worse. In fact I think it's better. When I have a spicy political opinion, I can either go get validated in an echo chamber like reddit or newsmedia, or let ChatGPT tell me I'm a f'n idiot and spell out a much more rational take.

Until the models are diluted to serve the true purpose of the thought control already in full effect in non-AI media, they're simply better for humanity.

alaithea|5 months ago

ChatGPT has been shown to spend much more time validating people's poor ideas than refuting them, even in cases where specific guardrails have supposedly been implemented, such as those meant to avoid encouraging self-harm. See recent articles about AI usage inducing god complexes and psychoses, for instance[1]. Validating the user giving the prompt is what it's designed to do, after all. AI seems to be objectively worse for humanity than what we had before it.

[1]: https://www.psychologytoday.com/us/blog/urban-survival/20250...

UtopiaPunk|5 months ago

Why would an LLM give you a more "rational take"? It's got access to a treasure trove of kooky ideas from Reddit, YouTube comments, various manifestos, etc etc. If you'd like to believe a terrible idea, an LLM can probably provide all of the most persuasive arguments.

bloppe|5 months ago

ChatQanon is coming