item 37768004


andrewguenther | 2 years ago

I think LLMs are presenting some uncomfortable philosophical questions for people about how our own brains work and admitting that there is any kind of "intelligence" (even if very basic) in an LLM is an admission that our own brains may work in a similar manner.


YeGoblynQueenne | 2 years ago

For me, one of the most interesting things to have come out of LLMs is the confirmation that humans are very bad at reasoning and, consequently, that it's a very bad idea to try to make machines that "think like humans": that way we'll only make machines with none of the advantages of machines and all the disadvantages of humans.

For instance (I'm not trying to be mean, and I'm certainly not blaming you in particular, because I've seen this very often), the reasoning that, because LLMs can generate language and humans can generate language, not only are LLMs somehow like humans but humans are also like LLMs, is not sound.

For example, walls have ears, cats have ears, therefore walls are like cats and cats are like walls. That doesn't work because walls' ears are not like cats' ears and even if they were, that still wouldn't make walls cats and cats walls, it would just make them both entities with ears.

corethree | 2 years ago

>-I'm not trying to be mean and I'm certainly not blaming you in particular, because I've seen this very often- but the reasoning that because LLMs can generate language, and humans can generate language not only LLMs are somehow like humans but also humans are like LLMs is not sound.

Nah. Nobody personifies LLMs like this. What you're laying out here is a fundamental mistake that you'd have to be extremely stupid to make. I think barely anyone makes this mistake, so it's hardly worth mentioning.

Seriously, who here thinks that LLMs are anything like humans? That is not the claim. The claim is that LLMs understand you. Intelligence and understanding are clearly orthogonal to being "human-like".

famouswaffles | 2 years ago

>because that way we'll only make machines with none of the advantages of machines and all the disadvantages of computers.

There is no evidence, basically none whatsoever, that general "perfect logical reasoning" is a thing that actually exists in the real world. None.

No animal we've observed does it. Humans certainly don't do it. The only realm where this idea actually works is fiction, and not for lack of trying: some of the greatest minds worked on this for decades, and some people still don't seem to get it. Logic systems don't scale; they break down on real-world relationships.

Logic systems are that guy in the stands yelling that he could've made the shot, while he's not even on the field.

andrewguenther | 2 years ago

No offense taken. I tried to be clear in my phrasing that I don't personally subscribe to the "LLMs are just like us" mentality. Just making an observation as to why people have such visceral reactions to any implication that they might be.

corethree | 2 years ago

This. I think this is the reason why so many people are in denial. Is all of intelligence simply trying to find the best-fit curve in an n-dimensional scatter plot of data points?
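Whether or not it captures all of intelligence, the "best-fit curve" picture itself is easy to make concrete. A minimal sketch (the underlying function, noise level, and polynomial degree are arbitrary choices for illustration, not anything specific to LLMs):

```python
import numpy as np

# Hypothetical illustration: "learning" as curve fitting.
# Sample a hidden function with noise, then recover it by
# least-squares fitting, i.e. finding the best-fit curve
# through the scatter of data points.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y_true = 0.5 * x**3 - x                          # the "world" generating the data
y_noisy = y_true + rng.normal(scale=0.05, size=x.shape)

# Fit a cubic: choose coefficients minimizing squared error.
coeffs = np.polyfit(x, y_noisy, deg=3)
y_fit = np.polyval(coeffs, x)

# The fitted curve tracks the hidden function despite the noise.
max_error = np.max(np.abs(y_fit - y_true))
print(f"max deviation from true curve: {max_error:.4f}")
```

The same idea scales up in dimension: replace the cubic with a model that has billions of parameters and the scatter plot with a text corpus, and the analogy people are debating is whether anything essential changes along the way.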