What worries me is that _a lot of people seem to see LLMs as smarter than themselves_ and anthropomorphize them into a sort of human-exact intelligence. The worst-case scenario of Utah's law is that once the disclaimer that a report was generated by AI is added, enough jurors begin to associate that with "likely more correct than not".
intended|2 months ago
In contrast, the SV framing of AI has been about skynet / singularity, with a hype cycle to match.
This is supported by the lack of clarity on actual benefits, and the absence of clear data on GenAI use. Mostly I see it as great for prototyping, going from 0 to 1, and for use cases where the operator is highly trained and capable of verifying the output.
Outside of that, you seem to be in the land of voodoo, dealing with something that eerily mimics human speech but gives you no reliable way of finding out whether it's just BS-ing you.
simonjgreen|2 months ago
mc32|2 months ago
Are they not going to build a “skynet” in China? Second, building skynet doesn’t imply eviscerating youth employment.
On the other hand, automation of menial tasks does eviscerate all kinds of employment, not only youth employment.
latentsea|2 months ago
dataflow|2 months ago
Verdex|2 months ago
Similarly, the claim is that ~90% of communication is nonverbal, so I'm not sure I would trust a negotiator who has seen all of written human communication but never held a conversation.
Marha01|2 months ago
Well, in many cases they might be right...
roenxi|2 months ago
chrz|2 months ago
computerthings|2 months ago
[deleted]
cortic|2 months ago
So yes, most people are right in that assumption, at least by the metric of how we generally measure intelligence.
kylecazar|2 months ago
If anything, I think they'd consider AI's involvement as a strike against the prosecution if they were on a jury.
Workaccount2|2 months ago
Not like food or clothing, but stuff like DLC content, streaming services, and LLMs.
theoreticalmal|2 months ago
roenxi|2 months ago
catlover76|2 months ago
[deleted]
KronisLV|2 months ago
I think the anthropomorphizing part is what messes with people. Is the autocomplete in my IDE smarter than I am? What about the search box on Google? What about a hammer or a drill?
Yet I will admit that most of the time I hear people complaining that AI-written code is worse than what developers produce, it just doesn't match my own experience. With enough guidance and context (say, 95% of tokens in and 5% out, multiple models working on the same project to occasionally validate and improve/fix the output, alongside adequate tooling), it's frankly better than what a lot of the people I know could produce, or frankly do produce, in practice.
That's a lot of conditions, but I think it's the same with the chat format: some people accept unvalidated drivel as fact, while others use the web search, parse documents, bring up additional information found over the course of the conversation, and feed in external data to make use of the LLM's ability to churn through a lot of it, sometimes better than human reading comprehension would.
saghm|2 months ago
charcircuit|2 months ago
eCa|2 months ago
opan|2 months ago
If you could get the full page text of every URL on the first page of DDG results and dump it into vim/emacs, where you can move and search around quickly, that would probably be about as good, and without the hallucinations. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)
It has no human counterpart, in the sense that humans still go to the library (or a search engine) when they don't know something; we don't have the contents of all the books (or articles/websites) stored in our heads.
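A minimal sketch of that workflow, using only the Python standard library. The names (`page_to_text`, `dump_results`) and the overall shape are my own illustration, not an existing tool; you'd still need to supply the result URLs yourself:

```python
import html.parser
import urllib.request


class TextExtractor(html.parser.HTMLParser):
    """Collects visible text from HTML, skipping script/style contents."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def page_to_text(html_source: str) -> str:
    """Strip an HTML document down to its visible text, one chunk per line."""
    parser = TextExtractor()
    parser.feed(html_source)
    return "\n".join(parser.chunks)


def dump_results(urls, out_path="results.txt"):
    """Fetch each result URL and append its plain text to one file,
    ready to open in vim/emacs and search through."""
    with open(out_path, "w", encoding="utf-8") as out:
        for url in urls:
            with urllib.request.urlopen(url) as resp:
                out.write(f"==== {url} ====\n")
                out.write(page_to_text(resp.read().decode("utf-8", "replace")))
                out.write("\n\n")
```

Whatever ends up in `results.txt` is exactly what the pages said, so the "no hallucinations" property holds by construction; the trade-off is you do the reading and synthesis yourself.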
godelski|2 months ago
If knowledge == intelligence, then Google and Wikipedia are "smarter" than you, and the AGI problem was solved decades ago.
saghm|2 months ago
zhoujianfu|2 months ago
krainboltgreene|2 months ago
solumunus|2 months ago
gloosx|2 months ago