top | item 42001388

mitko | 1 year ago

This is so uncannily close to the problems we're encountering at Pioneer, trying to make human+LLM workflows in high stakes / high complexity situations.

Humans are smart and make so many decisions and calculations at the subconscious/implicit level, taking a lot of mental shortcuts. As we try to automate this by following the process exactly, we bring a lot of that implicit thinking to the surface, and that slows everything down. So we've had to be creative about how we build LLM workflows.

haccount|1 year ago

Language seems to be confused with logic or common sense.

We've observed it previously in psychiatry (and modern journalism, but here I digress), but LLMs have made it obvious that grammatically correct, naturally flowing language requires a "world" model of the language and close to nothing of reality. Spatial understanding? Social cues? Common sense logic? Mathematical logic? All optional.

I'd suggest we call the LLM language fundament a "Word Model"(not a typo).

Trying to distil a world model out of the word model would be a suitable starting point for a modern remake of Plato's cave.

beardedwizard|1 year ago

I am baffled that people have to continue making this argument over and over and over. Your rationale makes total sense to me, but the debate rages on whether or not LLMs are more than just words.

Articles like this only seem to confirm that any reasoning is an illusion based on probabilistic text generation. Humans don't carefully write out all the words of their implicit reasoning, so the machine can't mimic them.

What am I missing that makes this debatable at all?

elif|1 year ago

Language is the tool we use to codify a heuristic understanding of reality. The world we interact with daily is not the physical one, but an ideological one constructed out of human ideas from human minds. This is the world we live in, and the air we breathe is made of our ideas about oxygenation and partly of our concept of being alive.

It's not that these "human tools" for understanding "reality" are superfluous, it's just that they are second-order concepts. Spatial understandings, social cues, math, etc. Those are all constructs built WITHIN our primary linguistic ideological framing of reality.

PedroBatista|1 year ago

It’s in the name: Language Model, nothing else.

jumping_frog|1 year ago

There is a four part documentary by Stephen Fry called "Planet Word". Worth watching.

kbrisso|1 year ago

Bingo, great reply! This is what I've been trying to explain to my wife. LLMs use fancy math and our language examples to reproduce our language, but have no thoughts or feelings.

TylerE|1 year ago

I sometimes wonder how they’d do if trained on a relatively rigid language like Japanese, which has far fewer ambiguities than English.

repeekad|1 year ago

Hi, I’m just a random internet stranger passing by and was intrigued by Plato’s Cave, as I’m not a fancy person who reads books. GPT-4o expanded on it quite well, but I’m not sure how I feel about it…

Using AI how I just did feels like cheating on an English class essay by using spark notes, getting a B+, and moving right on to the next homework assignment.

On one hand, I didn’t actually read Plato to learn and understand this connection, nor do I have a good authority to verify if this output is a good representation of his work in the context of your comment.

And yet, while I’m sure students could always buy or borrow reference books for common school texts, AI now makes this “spark notes” process effectively a commodity for almost any topic, like having a cross-domain, low-cost tutor instantly available at all times.

I like the metaphor that LLMs will do to language what calculators did to math, but I don’t really know what that means yet.

GPT output:

“““ The reference to Plato’s Cave here suggests that language models, like the shadows on the wall in Plato’s allegory, provide an imperfect and limited representation of reality. In Plato’s Cave, prisoners are chained in a way that they can only see shadows projected on the wall by objects behind them, mistaking these shadows for the whole of reality. The allegory highlights the difference between the superficial appearances (shadows) and the deeper truth (the actual objects casting the shadows).

In this analogy, large language models (LLMs) produce fluent and grammatically correct language—similar to shadows on the wall—but they do so without direct access to the true “world” beyond language. Their understanding is derived from patterns in language data (“Word Model”) rather than from real-world experiences or sensory information. As a result, the “reality” of the LLMs is limited to linguistic constructs, without spatial awareness, social context, or logic grounded in physical or mathematical truths.

The suggestion to call the LLM framework a “Word Model” underscores that LLMs are fundamentally limited to understanding language itself rather than the world the language describes. Reconstructing a true “world model” from this “word model” is as challenging as Plato’s prisoners trying to understand the real world from the shadows. This evokes the philosophical task of discerning reality from representation, making a case for a “modern remake of Plato’s Cave” where language, not shadows, limits our understanding of reality. ”””

lolinder|1 year ago

This is a regression in the model's accuracy at certain tasks when using CoT, not its speed:

> In extensive experiments across all three settings, we find that a diverse collection of state-of-the-art models exhibit significant drop-offs in performance (e.g., up to 36.3% absolute accuracy for OpenAI o1-preview compared to GPT-4o) when using inference-time reasoning compared to zero-shot counterparts.

In other words, the issue they're identifying is that CoT is a less effective approach for some tasks compared to unmodified chat completion, not just that it slows everything down.
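For readers unfamiliar with the two styles being compared: the paper contrasts asking the model for an answer directly versus asking it to verbalize its reasoning first. A minimal sketch of the two prompt shapes (the task and wording here are illustrative placeholders, not the paper's actual prompts):

```python
# Sketch of zero-shot vs. chain-of-thought (CoT) prompting.
# The question and phrasing are hypothetical examples, not from the paper.

def zero_shot_prompt(question: str) -> str:
    # Ask for the answer directly, with no intermediate reasoning.
    return f"{question}\nAnswer:"

def cot_prompt(question: str) -> str:
    # Ask the model to write out its reasoning before answering.
    return f"{question}\nLet's think step by step, then give the final answer."

question = "Which word is an anagram of 'listen': silent or lesson?"
print(zero_shot_prompt(question))
print(cot_prompt(question))
```

The paper's finding is that the second style, despite helping on many benchmarks, can hurt accuracy on tasks where humans also do worse when forced to verbalize their thinking.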

mitko|1 year ago

Yeah! That's the danger with any kind of "model", whether it is CoT, CrewAI, or other ways to outsmart it. It is betting that a programmer/operator can break a large task up better than an LLM can keep attention (assuming the info fits in the context window).

ChatGPT's o1 model could make a lot of those programming techniques less effective, but they may still be around, as they are more manageable and constrained.

1317|1 year ago

why are Pioneer doing anything with LLMs? you make AV equipment