paufernandez | 4 months ago

In my case I fully grasp what such a future could be, but I don't think we are on the path to it. I believe people are too optimistic, i.e. they just believe instead of being truly skeptical.

From where I stand, LLMs are flawed in many ways, and the people who see progress as inevitable lack a mental model of the foundations of these systems, so they can't really extrapolate. Also, most people don't know any other forms of AI and haven't thought hard about this stuff on their own.

The most problematic things are:

1) LLMs are probabilistic, continuous functions forced into shape by gradient descent. (Just having a "temperature" seems so crazy to me; see the sketch after this list.) We need to merge symbolic and discrete forms of AI with them. Hallucinations are the elephant in the room. They should not be swept under the rug; they should not be there in the first place! If we try to cover them with a layer of varnish, the cost will be very large in the long run (it already is: step-by-step reasoning, mixture of experts, RAG, etc. are all varnish, in my opinion).

2) Even if generalization seems OK, I think it is still really far from where it should be: humans need exponentially less data and generalize to concepts far more abstract than anything current AI systems reach. This is related to HAS-A and IS-A relations, and current AI systems have neither explicitly. Hierarchy is supposed to emerge from the depth of the network, but that is a guess at best.

3) We are just putting layer upon layer of complexity instead of simplifying. It is the victory of the complexifiers, motivated by the rush to win the race. But even though the goal seems so close now, I am not so sure we are going to reach it. What are we gonna do, keep adding another order of magnitude of compute on top of the last one to move forward? That's the bubble that I see. I don't think that is solving AI at all. And I'm almost sure a much better way of doing AI is possible, but we have fallen into a bad attractor just because Ilya was very determined.
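
To make the "temperature" point in 1) concrete, here is a minimal sketch (plain numpy; the function name is mine, not any library's API) of what sampling actually is: the model emits scores, temperature rescales them, and the next token is a weighted dice roll.

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # Sample a token index from raw model scores, softened by temperature.
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        # Softmax with max-subtraction for numerical stability.
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # No symbolic decision anywhere: just a dice roll weighted by probs.
        return np.random.default_rng().choice(len(probs), p=probs)

    logits = [2.0, 1.0, 0.1]                           # toy scores for 3 tokens
    print(sample_next_token(logits, temperature=0.2))  # near-greedy: almost always 0
    print(sample_next_token(logits, temperature=2.0))  # flatter: often 1 or 2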

We need new models that are way simpler: symbolic and continuous at the same time (i.e. symbolic systems that simulate continuous ones), non-gradient-descent learning (just store stuff, like a database), HAS-A hierarchies to attend to different levels of structure, IS-A taxonomies as a way to generalize deeply, etc. A toy sketch of the last two ideas follows below.
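
For what I mean by "just store stuff like a database" plus IS-A generalization, a toy sketch (all names made up; this is the classic inheritance-network idea, not a real system):

    IS_A = {                      # IS-A taxonomy: child -> parent
        "sparrow": "bird",
        "penguin": "bird",
        "bird": "animal",
    }

    facts = {}                    # learned knowledge: plain key-value storage

    def learn(concept, prop, value):
        facts[(concept, prop)] = value   # learning = one write, no gradients

    def query(concept, prop):
        # Look up the concept itself, then generalize by climbing IS-A links.
        while concept is not None:
            if (concept, prop) in facts:
                return facts[(concept, prop)]
            concept = IS_A.get(concept)
        return None

    learn("bird", "can_fly", True)
    learn("penguin", "can_fly", False)   # an exception, stored at the specific node
    print(query("sparrow", "can_fly"))   # True, inherited from "bird"
    print(query("penguin", "can_fly"))   # False, the specific fact wins

Learning is a deterministic write, and generalization is an explicit, inspectable walk up the taxonomy: exactly the kind of structure gradient-descent systems only have implicitly, if at all.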

Even if we make progress by brute-forcing it with resources, there is so much work left to do in simplifying and finding new ideas that I still don't understand why people are so optimistic.

ACCount37|4 months ago

Symbolic AI is dead. Either stop trying to dig out and reanimate its corpse, or move the goalposts like Gary Marcus did - and start saying "LLMs with a Python interpreter beat LLMs without, and Python is symbolic, so symbolic AI won, GG".

Hallucinations are incredibly fucking overrated as a problem. They are a consequence of the LLM in question not having a good enough internal model of its own knowledge, which is downstream from how they're trained. Plenty of things could be done to improve on that - and there is no fundamental limitation that would prevent LLMs from matching human hallucination rates - which are significantly above zero.

There is a lot of "transformer LLMs are flawed" going around, and a lot of alternative architectures being proposed, or even trained and demonstrated. But so far? There's nothing that would actually outperform transformer LLMs at their strengths. Most alternatives are sidegrades at best.

For how "naive" transformer LLMs seem, they sure set a high bar.

Saying "I know better" is quite easy. Backing that up is really hard.

maplethorpe|4 months ago

> Hallucinations are incredibly fucking overrated as a problem. They are a consequence of the LLM in question not having a good enough internal model of its own knowledge, which is downstream from how they're trained. Plenty of things could be done to improve on that - and there is no fundamental limitation that would prevent LLMs from matching human hallucination rates - which are significantly above zero.

Why is there no fundamental limitation that would prevent LLMs from matching human hallucination rates? I'd like to hear more about how you arrived at that conclusion.

CuriouslyC|4 months ago

Symbolic AI isn't dead; we use it all the time, it's just not a good orchestrating layer for interacting with humans. LLMs are great as a human interface and orchestrator, but they're definitely going to be calling out to symbolic models for expanded functionality. This pattern is obvious; we're already on the path with agentic tool use and toolformers.
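
Roughly this shape, as a sketch: call_llm() is a hypothetical stand-in for whatever chat-completion API you use (here it returns a canned reply so the example runs), and the JSON tool-call protocol is made up for illustration.

    import json

    def call_llm(prompt):
        # Hypothetical stand-in for a real chat-completion call. Canned
        # reply so the sketch runs; a real model would decide this itself.
        return json.dumps({"tool": "solve_quadratic",
                           "args": {"a": 1, "b": -3, "c": 2}})

    def solve_quadratic(a, b, c):
        # Exact, deterministic work the LLM shouldn't do in its head.
        d = (b * b - 4 * a * c) ** 0.5
        return ((-b + d) / (2 * a), (-b - d) / (2 * a))

    TOOLS = {"solve_quadratic": solve_quadratic}

    def run(user_msg):
        reply = json.loads(call_llm(user_msg))
        if "tool" in reply:                    # model asked for symbolic help
            result = TOOLS[reply["tool"]](**reply["args"])
            return f"roots: {result}"          # a real loop hands this back to the model
        return reply

    print(run("What are the roots of x^2 - 3x + 2?"))  # roots: (2.0, 1.0)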

mikert89|4 months ago

Symbols and concepts are just collections of neurons that fire with the correct activation. It's all about the bitter lesson: human beings cannot design AI, they can only find the most general equations and the most general loss function, and push data in. And that's what we have, and that's why it's a big deal. The LLM is just a manifestation of a much broader discovery, a generalized learning algorithm. It worked on language because of the information density, but with more compute we may be able to push in more general sensory data...
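
For what it's worth, the "most general loss function" here is basically next-token cross-entropy. A minimal numpy sketch (my own toy, not any framework's API):

    import numpy as np

    def next_token_loss(logits, target_index):
        # Cross-entropy for one prediction: -log p(actual next token).
        logits = np.asarray(logits, dtype=np.float64)
        m = logits.max()
        log_z = m + np.log(np.exp(logits - m).sum())  # stable log-sum-exp
        return log_z - logits[target_index]

    # The entire training signal: push this number down over all the data.
    print(next_token_loss([4.0, 1.0, 0.5], 0))  # ~0.08: model expected token 0
    print(next_token_loss([4.0, 1.0, 0.5], 2))  # ~3.58: model was surprised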

ogogmad|4 months ago

Not sure this is a good counterpoint in defence of LLMs, but I'm reminded of how Unix people explain why (in their experience) data should be encoded, stored and transmitted as text instead of something seemingly more natural like binary: text provides more ways to read and transform it, IN SPITE of its obvious inefficiency. LLMs are the ultimate Unix text transformation filter. They are extremely flexible out of the box and friendly towards experimentation.
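
To push the analogy, a sketch of an LLM as a literal Unix filter; llm_transform() is a hypothetical stand-in (the placeholder body just echoes its input so the script runs):

    import sys

    def llm_transform(instruction, text):
        # Hypothetical stand-in: a real version would send `instruction`
        # and `text` to a completion API. Echoing keeps the sketch runnable.
        return text

    if __name__ == "__main__":
        instruction = " ".join(sys.argv[1:]) or "rewrite this more clearly"
        sys.stdout.write(llm_transform(instruction, sys.stdin.read()))

It then composes like any other tool, e.g. cat notes.txt | python llm_filter.py "extract the TODOs" | sort (file and script names hypothetical).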

pllbnk|4 months ago

> We are just putting layer upon layer of complexity instead of simplifying.

It really irks me that the direction every player seems to be going in is to layer LLMs on top of each other, with the goal of saving money on inference while still making users believe they are getting high-quality results.

Instead of discovering radical new ways of improving the algorithms, they are only marginally improving existing architectures, and even that is debatable.

pixl97|4 months ago

Symbolic AI is mostly dead; we spent a lot of time and money on it and got complex, fragile systems that are far worse than LLMs.