item 47100060


donperignon | 8 days ago

An LLM will never reason. Reasoning is an emergent behavior of these systems that is poorly understood. Neurosymbolic systems, combined with LLMs, will define the future of AI.


hackinthebochs | 8 days ago

What are neurosymbolic systems supposed to bring to the table that LLMs can't in principle? A symbol is just a vehicle with a fixed semantics in some context. Embedding vectors of LLMs are just that.

logicprog | 8 days ago

Pre-programmed, hard-and-fast rules for manipulating those symbols, which can automatically be chained together according to other preset rules. This makes the system reliable and observable. Think Datalog.

IMO, symbolic AI is way too brittle and case-by-case to drive useful AI, but as a memory and reasoning system for more dynamic and flexible LLMs to call out to, it's a good idea.
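To make the "preset rules over symbols" idea concrete, here is a minimal sketch of Datalog-style forward chaining in Python (the `parent`/`ancestor` relations are illustrative, not from the thread): facts are plain tuples, rules fire deterministically, and derivation runs to a fixed point, so every conclusion is traceable to an explicit rule.

```python
# Facts are symbol tuples: (relation, arg1, arg2).
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def apply_rules(facts):
    """Datalog-style rules:
       ancestor(X, Y) :- parent(X, Y).
       ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)."""
    derived = set()
    for rel, x, y in facts:
        if rel == "parent":
            derived.add(("ancestor", x, y))          # base rule
            for rel2, y2, z in facts:
                if rel2 == "ancestor" and y2 == y:
                    derived.add(("ancestor", x, z))  # recursive rule
    return derived

# Chain rules until no new facts appear (a fixed point):
# the result is deterministic and every step is observable.
while True:
    new = apply_rules(facts) - facts
    if not new:
        break
    facts |= new

print(("ancestor", "alice", "carol") in facts)  # True
```

The fixed-point loop is the "automatic chaining" part: no step involves sampling, so the same facts and rules always yield the same conclusions.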

theywillnvrknw | 8 days ago

Slicing high dimensional concepts like 'reasoning' into discrete categories of 'will' and 'will not' ... will not work :P

simianwords | 8 days ago

How do you falsify the claim that "an LLM will never reason"?

I asked GPT to compute some hard multiplications and the reasoning trace seems valid and gets the answer right.

https://chatgpt.com/share/6999b72a-3a18-800b-856a-0d5da45b94...
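One way to check such a reasoning trace independently is to recompute the schoolbook partial products and compare them against what the model wrote down. A minimal sketch (the operands below are illustrative, not taken from the linked chat):

```python
def long_multiplication_steps(a: int, b: int):
    """Return the partial products of schoolbook long multiplication:
    one term per digit of b, shifted by its place value."""
    steps = []
    for i, digit in enumerate(reversed(str(b))):
        steps.append(a * int(digit) * 10**i)
    return steps

# A valid trace for 348 * 276 should contain these partial products
# (348*6, 348*70, 348*200), and they must sum to the final answer.
steps = long_multiplication_steps(348, 276)
assert sum(steps) == 348 * 276
print(steps)
```

Comparing a model's intermediate lines against these ground-truth steps is one concrete way to judge whether a trace is a genuine derivation rather than a memorized answer.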

donperignon | 8 days ago

I don't need to. LLMs are probabilistic systems; they are not designed to reason. It's actually the opposite: nobody can explain some of the emergent behaviour they exhibit. Would you let one of those control air traffic based on "black magic"? Sometimes I have the feeling that we have forgotten what the scientific method is...

DiscourseFan | 8 days ago

They can do some sort of reasoning, but not the way humans can

Zanthous | 6 days ago

Are people still participating in this charade of pretending LLMs cannot reason?