top | item 46399795

futuraperdita | 2 months ago

What worries me is that _a lot of people seem to see LLMs as smarter than themselves_ and anthropomorphize them into a sort of human-exact intelligence. The worst-case scenario of Utah's law is that when the disclaimer is added that the report is generated by AI, enough jurists begin to associate that with "likely more correct than not".

intended|2 months ago

Reading how AI is being approached in China, the focus is more on achieving day-to-day utility, without eviscerating youth employment.

In contrast, the SV focus of AI has been about skynet / singularity, with a hype cycle to match.

This is supported by the lack of clarity on actual benefits, or clear data on GenAI use. Mostly I see it as great for prototyping - going from 0 to 1, and for use cases where the operator is highly trained and capable of verifying output.

Outside of that, you seem to be in the land of voodoo, where you are dealing with something that eerily mimics human speech, but you don't have any reliable way of finding out whether it's just BS-ing you.

simonjgreen|2 months ago

Do you have any links you could share to content you found especially insightful about AI use in China?

mc32|2 months ago

I’m not seeing the dichotomy as much as you do.

Are they not going to build a “skynet” in China? Second, building skynet doesn’t imply eviscerating youth employment.

On the other hand, automation of menial tasks does eviscerate all kinds of employment, not only youth employment.

latentsea|2 months ago

Well at least DeepMind is doing nifty things like solving the protein folding problem.

dataflow|2 months ago

One problem here is that "smarter" is an ambiguous word. I have no problem believing the average LLM has more knowledge than my brain; if that's what "smarter" means, then I'm happy to believe I'm stupid. But I sure doubt an LLM's ability to deduce or infer things, or to understand its own doubts and lack of knowledge or understanding, better than a human like me.

Verdex|2 months ago

Yeah my thought is that you wouldn't trust a brain surgeon who has read every paper on brain surgery ever written but who has never touched a scalpel.

Similarly, the claim is that ~90% of communication is nonverbal, so I'm not sure I would trust a negotiator who has seen all of written human communication but never held a conversation.

Marha01|2 months ago

> a lot of people seem to see LLMs as smarter than themselves

Well, in many cases they might be right..

roenxi|2 months ago

As far as I can tell from poking people on HN about what "AGI" means, there might be a general belief that the median human is not intelligent. Given that the current batch of models apparently isn't AGI, I'm struggling to see a clean test of what AGI might be that a human can pass.

chrz|2 months ago

So tired of this argument.

cortic|2 months ago

> ChatGPT (o3): Scored 136 on the Mensa Norway test in April 2025

So yes, most people are right in that assumption, at least by the metric of how we generally measure intelligence.

kylecazar|2 months ago

Maybe it's just my circle, but anecdotally most of the non-CS folks I know have developed a strong anti-AI bias. In a very outspoken way.

If anything, I think they'd consider AI's involvement as a strike against the prosecution if they were on a jury.

Workaccount2|2 months ago

A core problem with humans (or perhaps not even a problem, just something that takes a long time to recognize) is that they complain about and hate on things they continue to spend money on.

Not like food or clothing, but stuff like DLC content, streaming services, and LLMs.

theoreticalmal|2 months ago

Why do people in your circle not like AI? I have a similar experience with friends and family not liking AI, but usually it's due to water and energy reasons, not because of an issue with the model reasoning.

roenxi|2 months ago

AIs are an obvious threat to their ability to make money off their skills.

KronisLV|2 months ago

> a lot of people seem to see LLMs as smarter than themselves

I think the anthropomorphizing part is what messes with people. Is the autocomplete in my IDE smarter than I am? What about the search box on Google? What about a hammer or a drill?

Yet I will admit that while most of the time I hear people complaining about how AI-written code is worse than what developers produce, it just doesn't match my own experience. Frankly, it's better than what a lot of the people I know could, or in practice do, produce - given enough guidance and context (say 95% of tokens in and 5% out, with multiple models working on the same project to occasionally validate and improve/fix the output, alongside adequate tooling).

That's a lot of conditions, but I think it's the same with the chat format. There's a difference between people accepting unvalidated drivel as fact, and someone using web search, parsing documents, and surfacing additional information found as a consequence of the conversation - bringing in external data and using the LLM's ability to churn through a lot of it, sometimes better than human reading comprehension would.

saghm|2 months ago

I think you're spot on here. It's the same idea as scammers and con artists; people can be convinced of things they might rationally reject if the language is persuasive enough. This isn't some new exploit in human behavior or an epidemic of people who are less intelligent than before; we've just never had to deal with an almost literally unlimited supply of plausible-sounding, coherent human language before. If we're lucky, people will manage to adapt and update their mental models to be less trusting of things they can't verify (like how most of us hopefully don't need to worry that our older relatives will transfer their bank account contents to benevolent foreign royalty with the expectation of being rewarded handsomely). It's hard to feel especially confident in this, though, given how much more open-ended the potential deceptions are (without even getting into the question of "intent" from the models or their creators).

charcircuit|2 months ago

AI is smarter than everyone already. Seriously, the breadth of knowledge the AI possesses has no human counterpart.

eCa|2 months ago

Just this weekend it (Gemini) produced two detailed sets of instructions on how to connect different devices over Bluetooth, including a video (that I didn't watch), even though the devices did not support connecting in that direction. No reasonable human reading the manuals involved would think those solutions feasible. Not impressed, again.

opan|2 months ago

It's pretty similar to looking something up with a search engine, mashing together some top results + hallucinating a bit, isn't it? The psychological effect of the chat-like interface, plus the lower friction of posting in said chat again vs reading 6 tabs and redoing your search, seems to be the big killer feature. The main "new" info is often incorrect info.

If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)
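The "dump the full page text" step above can be sketched with just the standard library. This is a rough illustration, not a real tool: the search-result fetching and editor integration are left out, and the class and function names are made up for the example.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects the visible text of an HTML page, skipping script/style contents."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep non-blank text that isn't inside a skipped element.
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


def page_text(html: str) -> str:
    """Return the visible text of `html`, one fragment per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)


if __name__ == "__main__":
    sample = "<html><body><h1>Title</h1><script>x()</script><p>Some text.</p></body></html>"
    print(page_text(sample))
```

Running `page_text` over each fetched result and concatenating the outputs into a buffer would give you the "everything in one editor window" workflow described above.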

It has no human counterpart in the same sense that humans still go to the library (or a search engine) when they don't know something, and we don't have the contents of all the books (or articles/websites) stored in our head.

godelski|2 months ago

  > the breadth of knowledge
knowledge != intelligence

If knowledge == intelligence then Google and Wikipedia are "smarter" than you and the AGI problem has been solved for several decades.

saghm|2 months ago

Even if we were going to accept the premise that total knowledge is equivalent to intelligence (which is silly, as sibling comments have pointed out), shouldn't accuracy also come into play? AI also says a lot more obviously wrong things than the average person, so how do you weight that against the purported knowledge? You could answer yes or no randomly to any arbitrary question about whether something is true and approximate a 50% accuracy rate with an evenly distributed pool of questions, but that's obviously not proof that you know everything. I don't think the choice of where to draw the line on "how often can you be wrong and have it still matter" is as easy as you're implying, or that everyone will necessarily agree on where it lies (even if we all agree that 50% correctness is obviously way too low).
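The 50% baseline mentioned above is easy to check with a tiny simulation (the function name and the balanced question pool are illustrative, not from the comment):

```python
import random


def random_guesser_accuracy(n_questions: int, seed: int = 0) -> float:
    """Answer a balanced pool of true/false questions by coin flip; return accuracy."""
    rng = random.Random(seed)
    # Balanced pool: half the questions have answer True, half False.
    truths = [i % 2 == 0 for i in range(n_questions)]
    guesses = [rng.random() < 0.5 for _ in range(n_questions)]
    correct = sum(g == t for g, t in zip(guesses, truths))
    return correct / n_questions


if __name__ == "__main__":
    # Accuracy hovers near 0.5, which proves nothing about knowledge.
    print(random_guesser_accuracy(100_000))
```

A guesser that "knows" nothing still lands near 50% on such a pool, which is why raw accuracy on easy or balanced questions is a weak proxy for knowledge.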

zhoujianfu|2 months ago

AI has more knowledge than everyone already; I wouldn't say smarter, though. It's like wisdom vs intelligence in D&D (and/or life): wisdom is knowing things, intelligence is how quickly you can learn / create new things.

krainboltgreene|2 months ago

Man, what are we supposed to do with people who think the above?

solumunus|2 months ago

Having knowledge is not exactly the same as being smart though, is it?

gloosx|2 months ago

It's like saying Google search is smarter than everyone because the amount of information it indexes has no human counterpart. Such a silly take...