top | item 43371920

unification_fan | 11 months ago

That's like trying to stop a hemorrhage with a band-aid.

Daily reminder that traditional AI expert systems from the '60s have zero problems with hallucinations, by virtue of their architecture.

Why we aren't building LLMs on top of ProbLog is a complete mystery to me (jk; it's because 90% of the people working in AI right now have never heard of it: they got into the field through statistics instead of logic, and all they know is how to mash matrices together).

Clearly language by itself doesn't cut it; you need some way to enforce logical rigor, and capabilities such as backtracking, if you care about getting an explainable answer out of the black box. Like we were doing 60 years ago, before we suddenly forgot in favor of throwing teraflops at matrices.
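To make "backtracking plus an explainable answer" concrete, here is a minimal sketch in plain Python (not actual ProbLog, and nowhere near a real resolution engine): a toy backward-chaining solver over single-subgoal rules that backtracks over the rule base and returns a proof trace you can actually read. All facts and rules here are invented for illustration.

```python
# Toy backward-chaining solver with backtracking (single-subgoal rules only).
# Facts and rules are (functor, argument) tuples; "X" is the rule variable.
FACTS = {("human", "socrates"), ("human", "plato")}
RULES = [(("mortal", "X"), [("human", "X")])]  # mortal(X) :- human(X).

def prove(goal, trace):
    """Yield one proof trace per way of deriving `goal`, backtracking as needed."""
    if goal in FACTS:
        yield trace + [f"fact: {goal}"]
        return
    functor, arg = goal
    for (h_f, h_a), body in RULES:
        if h_f != functor:
            continue  # this rule can't apply; backtrack to the next one
        # Bind the rule's variable to the query's argument (toy single-var case).
        bound = [(b_f, arg if b_a == h_a else b_a) for b_f, b_a in body]
        for sub in bound:
            yield from prove(sub, trace + [f"rule: {(h_f, arg)} :- {sub}"])

for proof in prove(("mortal", "socrates"), []):
    print(" -> ".join(proof))
```

The point isn't the twenty lines of code; it's that every answer comes with the chain of facts and rules that produced it, which is exactly what a pile of matrix multiplications doesn't give you.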

If Prolog is Qt (or, hell, even ncurses), then LLMs are basically Electron. They get the job done, but they're horribly inefficient and clearly not the best tool for the task. But inexperienced developers think LLMs are this amazing oracle that solves every problem in the world, so they throw LLMs at anything that vaguely looks like a problem.

WhitneyLand | 11 months ago

Does your brain really tell you it's more likely that 90% of people in the field are ignorant, rather than that old expert systems were brittle, couldn't learn from data, required extensive manual knowledge editing, and couldn't generalize?

Btw, as far as throwing teraflops goes: the ability to scale with compute is a feature, not a bug.

earnestinger | 11 months ago

It can be both. (Ignorant not as in "idiots", but as in not experts in, or proponents of, this particular niche.)

jdaw0 | 11 months ago

People stopped making these systems because they simply didn't work to solve the problem.

There's a trillion dollars in it for you if you can prove me wrong and build one that does the job better than modern transformer-based language models.

ben_w | 11 months ago

I think it's more that the old expert systems (AKA flow charts) did work, but required you to already be an expert to answer every decision point.

Modern LLMs solve the huge problem of turning natural language from non-experts into the kind of question an expert system can use… 95% of the time.

95% is fantastic if you're e.g. me with GCSE grade C in biology from 25 years ago, asking a medical question. If you're already a domain expert, it sucks.

I suspect that feeding the output of an LLM into an expert system is still useful, for much the same reason that feeding code from an LLM into a compiler is useful.
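A hedged sketch of that "LLM output into a checker" pipeline, with everything invented for illustration: `extract_facts` stands in for a real LLM call that maps free text to structured facts, and a small constraint layer stands in for the expert system, rejecting combinations that violate hard rules the way a compiler rejects ill-typed code.

```python
# Hypothetical pipeline: LLM turns text into facts, rules validate the facts.
def extract_facts(text):
    # Stand-in for an LLM call; a real system would parse `text` here.
    return {"species": "human", "age": 200}

# Hard constraints playing the role of the expert system's knowledge base.
CONSTRAINTS = [
    ("human age is in [0, 130]",
     lambda f: f.get("species") != "human" or 0 <= f.get("age", 0) <= 130),
]

def validate(facts):
    """Return the names of violated constraints, like compiler diagnostics."""
    return [name for name, check in CONSTRAINTS if not check(facts)]

errors = validate(extract_facts("a 200-year-old human patient"))
print(errors)  # the age constraint fails for this made-up input
```

The LLM handles the fuzzy natural-language front end; the symbolic layer refuses to pass along outputs that can't possibly be true.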

nickpsecurity | 11 months ago

That assumes it can even be done. It's worth looking into. There have been some projects in those areas.

Mixing probabilistic logic with deep learning:

https://arxiv.org/abs/1808.08485

https://github.com/ML-KULeuven/deepproblog

Combining decision trees with neural nets for interpretability:

https://arxiv.org/abs/2011.07553

https://arxiv.org/pdf/2106.02824v1

https://arxiv.org/pdf/1806.06988

https://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-...

It looks like model transfer from uninterpretable, pretrained models to interpretable models remains the best strategy. That also justifies work like Ai2's OLMo, where all pretraining data is available, so other techniques (like those used in search engines) can help explainable models connect facts back to source material.

imoreno | 11 months ago

> Why we aren't building LLMs on top of ProbLog

> they got into the field through statistics instead of logic

LLMs are by definition built from neural networks, which indeed work by "mashing matrices" rather than "logic". That's the axiom of the technique. It sounds like you're saying we should throw away half a century of progress and start from scratch in a completely different direction. Maybe it would work, but who's gonna do all of that? I doubt that vague heckling from random comment threads will convince researchers to commit to multiple lifetimes of work.

Instead of trying to reinvent LLMs, it would be more practical to focus on preprocessing input (e.g. RAG) and postprocessing output (e.g. detecting hallucinations and telling the model to fix them before returning results to the user). This is somewhere something like ProbLog might conceivably produce an advantage. So if you really want to rehabilitate Prolog in the field, why don't you go ahead and develop an LLM program with it, and everyone can see for themselves how much better it is?
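A minimal sketch of the postprocessing idea, with the obvious caveat that real systems use retrieval plus entailment models; crude word-overlap is just a stand-in so the shape of the check is visible. Every sentence of the model's answer is compared against retrieved sources, and anything unsupported gets flagged for revision.

```python
# Post-generation hallucination check: flag answer sentences with no support
# in the retrieved sources. Word overlap is a crude stand-in for entailment.
def flag_unsupported(answer_sentences, sources):
    """Return the sentences whose words are mostly absent from every source."""
    flagged = []
    for sent in answer_sentences:
        words = set(sent.lower().split())
        # "Supported" here: at least half the sentence's words appear in a source.
        supported = any(len(words & set(src.lower().split())) >= len(words) // 2
                        for src in sources)
        if not supported:
            flagged.append(sent)
    return flagged
```

In a real pipeline the flagged sentences would be fed back to the model with an instruction to revise or cite, rather than shown to the user.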

xpe | 11 months ago

The answer to too much exaggeration about AI, from whatever angle, is _not_ more exaggeration. I get the frustration, but exaggerated ranting is neither intellectually honest nor effective. The AI and software development ecosystem and its demographics are broad enough that lots of people agree with many of your points. Sure, there are lots of people on a hype train. So help calm it down.

amelius | 11 months ago

Probably because translating natural language into logical form isn't easy, which is also the point where this approach breaks down.

hhh | 11 months ago

‘expert systems’ are logic machines

npiano | 11 months ago

This assumes that logic is derived from predicting the next step from previous information, which is not accurate.

Bluestein | 11 months ago

This is tremendously cogent.