mrzimmerman | 2 years ago

I think the difference there is that you can model a cargo ship to show that it will float, how it will do so, which configurations float and which don’t, etc. With LLMs like GPT we know how they work in principle, but it’s hard to see what’s happening when one is actually running and doing things, making it hard to predict what the outcome will be or to perfectly understand how it came to the result it did (disclaimer: I am not an AI/ML engineer and this is just my personal understanding, which could be wrong).

I think a lot of the annoyance felt around here comes from the tendency to apply human attributes to LLMs. That’s not to say that ChatGPT isn’t fulfilling the definition of “reasoning” on some level, even if it were found to fully meet that definition. I think it’s more the leap to the conclusion, and that a lot of Hacker News readers are rankled by that leap lacking the research and facts to back it up.

Anyway, it’s all really a philosophical discussion, since “intelligence” and “reason” are soft terms even when applied to human beings. We can usually hand-wave them away as “intelligence is when humans do it the way humans do it.” Now that we’re on the cusp of creating actual artificial intelligence, I think we’re all finding those soft definitions cumbersome and are struggling to refine them as a society, which I think is a good thing. I also sometimes get rankled when someone asserts that some AI example has some human attribute, but mostly I think it’s an important part of that larger discourse about how we define these things, and that’s valuable in my mind.
