100ideas | 1 year ago

I found the opening quote of this article to be intriguing, especially since it was from a 1992 research lab:

“One year of research in neural networks is sufficient to believe in God.” The writing on the wall of John Hopfield’s lab at Caltech made no sense to me in 1992. Three decades later, and after years of building large language models, I see its sense if one replaces sufficiency with necessity: understanding neural networks as we teach them today requires believing in an immanent entity.

100ideas | 1 year ago

Basically, as LLMs scale up, the author (Soatto, a VP at AWS) suggests they begin to resemble Solomonoff inference: a hypothetically optimal but computationally infinite approach that executes all possible programs to match the observed data. By definition this procedure gives the best answer to any given question, yet it involves no learning, since the entire process is simply rerun from scratch for each new query (thanks to the assumption of infinite computation).
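
For intuition, here's a toy, bounded sketch of the idea (my own illustration, not from the article): "programs" are strings over a made-up three-symbol machine, each weighted exponentially by its length as a stand-in for the universal prior, and the next bit is predicted by a weighted vote among the programs that reproduce the data seen so far.

    from itertools import product

    # Toy stand-in for a universal machine: '0'/'1' emit a bit,
    # 'R' appends a copy of the output so far (a crude loop).
    def run(program, max_len=32):
        out = []
        for op in program:
            if op == 'R':
                out += out
            else:
                out.append(int(op))
            if len(out) >= max_len:
                break
        return out[:max_len]

    def solomonoff_predict(observed, max_prog_len=10):
        # Weight each program by 2^-length (a stand-in for the
        # universal prior), keep those whose output extends the
        # observed prefix, and take a weighted vote on the next bit.
        votes = {0: 0.0, 1: 0.0}
        n = len(observed)
        for length in range(1, max_prog_len + 1):
            for prog in product('01R', repeat=length):
                out = run(prog)
                if len(out) > n and out[:n] == observed:
                    votes[out[n]] += 2.0 ** -length
        return votes

    # Alternating data is explained by short programs like '01RR',
    # so the vote concentrates on "next bit is 0".
    print(solomonoff_predict([0, 1, 0, 1, 0, 1]))

The real thing enumerates every program of a universal Turing machine, which is exactly why it is uncomputable and serves only as a limiting ideal, not an algorithm.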

The article develops a theoretical framework contrasting traditional inductive learning (which emphasizes generalization over memorization) with transductive inference (which embraces memorization and reasoning). Here's a quote; a toy sketch of the distinction follows it:

"What matters is that LLMs are inductively trained transductive-inference engines and can therefore support both forms of inference.[2] They are capable of performing inference by inductive learning, like any trained classifier, akin to Daniel Kahneman’s “system 1” behavior — the fast thinking of his book title Thinking Fast and Slow. But LLMs are also capable of rudimentary forms of transduction, such as in-context-learning and chain of thought, which we may call system 2 — slow-thinking — behavior. The more sophisticated among us have even taught LLMs to do deduction — the ultimate test for their emergent abilities."
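
To make the system 1 / system 2 contrast concrete, here's a minimal sketch (again mine, not the article's; generate() is a hypothetical stand-in for any LLM completion API). Inductive use queries the frozen weights directly; transductive use ships labeled examples and a reasoning request along with the query.

    # Hypothetical stand-in for an LLM completion call -- this
    # function is assumed, not from the article; wire in a real API.
    def generate(prompt: str) -> str:
        raise NotImplementedError("plug in an actual LLM client")

    # "System 1" / inductive: the answer comes straight from whatever
    # the weights absorbed during training; the prompt carries no
    # task-specific examples.
    def classify_inductive(review: str) -> str:
        return generate(
            f"Sentiment of this review (positive/negative): {review}\n"
            "Sentiment:")

    # "System 2" / transductive: labeled examples travel with the
    # query (in-context learning), and asking for step-by-step
    # reasoning (chain of thought) invokes the slow, deliberate mode.
    def classify_transductive(examples, review: str) -> str:
        shots = "\n".join(f"Review: {r}\nSentiment: {s}"
                          for r, s in examples)
        return generate(
            f"{shots}\nReview: {review}\n"
            "Think step by step, then answer.\nSentiment:")

Nothing about the model changes between the two calls; the only difference is whether task data is consumed at training time or carried along at inference time, which is the distinction the quote is drawing.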

Sadly, the opening quote is not elucidated.