top | item 46412860

z3ugma | 2 months ago

and yet:

When you ask an AI like ChatGPT a question, what is it actually doing?

Survey of 2,301 American adults (August 1-6, 2025)

- Looking up the exact answer in a database: 45%

- Predicting what words come next based on learned patterns: 28%

- Running a script full of prewritten chat responses: 21%

- Having a human in the background write an answer: 6%

Source: Searchlight Institute
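For context, the second option ("predicting what words come next based on learned patterns") is the one closest to how these models actually work. A minimal sketch of next-word prediction, using a toy hand-written probability table rather than a real learned model (all names and probabilities here are made up for illustration):

```python
import random

# Toy "learned" distribution: given the previous word, the
# probabilities for the next word. A real LLM learns billions of
# such patterns over subword tokens, not a tiny lookup table.
model = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"down": 1.0},
}

rng = random.Random(0)  # fixed seed so runs are repeatable

def next_word(prev):
    """Sample the next word from the distribution for `prev`."""
    dist = model[prev]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights)[0]

def generate(start, max_new=3):
    """Repeatedly predict the next word, stopping when the last
    word has no continuation in the table."""
    out = [start]
    while len(out) <= max_new and out[-1] in model:
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))
```

Crucially, nothing here looks up "the answer" in a database or replays a prewritten response; each word is sampled from a distribution conditioned on what came before, which is the point the 28% of respondents got right.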

most survey respondents don't even _understand_ what AI is doing, so I am a bit skeptical to trust their opinions on whether it will cause harm

hackyhacky | 2 months ago

> most survey respondents don't even _understand_ what AI is doing, so I am a bit skeptical to trust their opinions on whether it will cause harm

Why do they need to know how AI works, in order to know that it is already having a negative effect on their lives and is likely to do so in the future?

I don't understand how PFAS [1] work, but I know I don't want them in my drinking water.

[1] https://www.niehs.nih.gov/health/topics/agents/pfc

embedding-shape | 2 months ago

> Why do they need to know how AI works, in order to know that it is already having a negative effect on their lives and is likely to do so in the future?

Because otherwise you might not actually be attributing the harm you're seeing to the right thing. Lots of people in the US think current problems are left/right or socialist/authoritarian, while it's obviously a class issue. But if you're unable to take a step back and see things clearly, you'll misattribute the reasons why you're suffering.

I walked around on this earth for decades thinking Teflon is a really harmful material, until this year, for some reason, I learned that Teflon is actually a very inert polymer that doesn't react with anything in our bodies. I've avoided Teflon pans and such just because of my misunderstanding about whether this thing is dangerous to my body. Sure, this is a relatively trivial example, but I'm sure your imagination can see how this concept has broader implications.

tptacek | 2 months ago

I'm fond of pointing out that in the 1980s, people raised the same kinds of alarms about databases.

moron4hire | 2 months ago

You seem to be raising this as a "just so" kind of reductio ad absurdum, but we have extant examples of databases and information technology enabling villainy like oppression and genocide: making correlations easier to track, and making tracking more efficient and less cost-prohibitive.

lostmsu | 2 months ago

None of these answers are correct btw.

moron4hire | 2 months ago

This is a fallacious argument. You don't need to understand the inner workings of a thing to see examples of harm and evaluate that harm as bad. For example, you don't need to understand how electric motors differ from internal combustion engines to understand that a mishandled car can very easily kill multiple people.

akersten | 2 months ago

The problem, following your analogy, is seeing the consequences from the mishandled car but blaming the electric motor, in this case.