
ClaraForm|5 months ago

I mean an LLM (bad example, but good enough for what I'm trying to convey) doesn't need any sort of "memory" to reconstruct something that looks like intelligence. It stores weights, and can reassemble "facts" from those weights, independent of the meaning or significance of those facts. It's possible the brain is similar, on a much more refined scale. My brain certainly doesn't store 35,000 instances of my mum's image to help me identify her, just an averaged image to help me know when I'm looking at my mum.
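A toy version of that "averaged image" idea, as a sketch only (an illustration, not a claim about how neural coding actually works): keep just the mean of many noisy views, throw the individual instances away, and recognise a new view by its distance to that single prototype.

    # Illustrative sketch: recognition from one averaged prototype,
    # with none of the original instances retained.
    import numpy as np

    rng = np.random.default_rng(0)
    mum_true = rng.normal(size=64)  # hypothetical "true" face features
    views = mum_true + rng.normal(scale=0.5, size=(35_000, 64))  # noisy sightings

    prototype = views.mean(axis=0)  # one averaged representation...
    del views                       # ...and the 35,000 instances are gone

    def looks_like_mum(image, threshold=6.0):
        # noisy views of mum land ~4 units from the prototype;
        # unrelated 64-dim images land ~11 units away
        return np.linalg.norm(image - prototype) < threshold

    print(looks_like_mum(mum_true + rng.normal(scale=0.5, size=64)))  # True
    print(looks_like_mum(rng.normal(size=64)))                        # False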

The brain definitely stores things, and retrieval and processing are key to the behaviour that comes out the other end, but whether it's "memory" in the sense this article tries to define, I'm not sure. The article makes a point of highlighting instances where /lack/ of a memory is a sign of the brain doing something different from an LLM, but from all of my reading and understanding, the brain is pretty happy to "make up" a "memory".


ckemere|5 months ago

Addressing the second paragraph here - while conflation and reconsolidation are real phenomena, it's also quite clear that most humans form episodic memories. Some people quite clearly have incredible abilities in this regard [2].

A distinction between semantic (facts/concepts) and episodic (specific experiences) declarative memories has been fairly well established since at least the 1970s. That the latter is required to construct the former has also long been posited, with reasonable evidence [1]. Similarly, there's a slightly more recent distinction between "recollecting" (i.e., similar to the author's "I can remember the event of learning this") and "knowing" (i.e., "I know this but don't remember why"), with differences in hypothesized recall mechanisms [3].

[1] https://www.science.org/doi/full/10.1126/science.277.5324.33... or many other reviews by Eichenbaum, Squire, Milner, etc.

[2] https://youtu.be/hpTCZ-hO6iI?si=FeFv8MGmHTzkLd8p

[3] https://psycnet.apa.org/record/1995-42814-001

pipularpop|5 months ago

"That the latter is required to construct the former is also long posited, with reasonable evidence [1]."

This is very interesting to me. I have temporal lobe epilepsy, and my episodic memory is quite poor. However, I believe I'm fairly good at learning new facts (i.e., semantic memory). Perhaps my belief is an illusion, or I'm really only learning facts when my episodic memory is less impaired (which happens; it varies from hour to hour). It's difficult for me to tell, of course.

mallowdram|5 months ago

Is the idea that cars start a fact or a concept? Is the certainty that I remember how this particular car starts a fact?

Once we begin to disengage from the arbitrariness inherent in our metaphors, and rely on what actually generates memories (action-neural-spatial-syntax), we can study what's really happening in the allocortex's distribution of cues between sense/emotion into memory.

Until then we will simply be trapped in falsely segregated ideas of episodic/semantic.

HarHarVeryFunny|5 months ago

The article isn't about LLMs storing things - it's about why they hallucinate, which is in large part because they deal in word statistics rather than facts, but also (the point of the article) because they have no episodic memories, or indeed any personal experience of any sort.

Humans can generally tell when they know something and when they don't, and I'd agree with the article that this is because we tend to remember how we know things, and also assign different levels of confidence according to source. Personal experience trumps watching someone else, which trumps hearing or being taught it from a reliable source, which trumps having read something on Twitter or some graffiti on a bathroom stall. To the LLM all text is just statistics, and it has no personal experience to lean on to self-check and say "hmm, I can't recall ever learning that - I'm drawing blanks".

Frankly it's silly to compare LLMs (Transformers) and brains. An LLM was only ever meant to be a linguistics model, not a brain or cognitive architecture. I think people get confused because it spits out human text, and so people anthropomorphize it and start thinking it's got some human-like capabilities under the hood when it is in fact - surprise surprise - just a pass-thru stack of Transformer layers. A language model.
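To make "just a pass-thru stack of layers" concrete, here's a minimal sketch (my own toy code, not any particular production model; positional encoding omitted for brevity): tokens in, next-token logits out, and everything the model "knows" lives in the weights. There is no episodic store, and no record of where any of it came from.

    # Toy causal language model: embed -> stacked Transformer layers -> logits.
    import torch
    import torch.nn as nn

    class TinyLM(nn.Module):
        def __init__(self, vocab=50_000, d=512, n_layers=6, n_heads=8):
            super().__init__()
            self.embed = nn.Embedding(vocab, d)
            layer = nn.TransformerEncoderLayer(d, n_heads, batch_first=True)
            self.stack = nn.TransformerEncoder(layer, n_layers)
            self.unembed = nn.Linear(d, vocab)

        def forward(self, tokens):  # tokens: (batch, seq)
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            x = self.stack(self.embed(tokens), mask=mask)
            return self.unembed(x)  # next-token logits; no provenance, no "source"

    logits = TinyLM()(torch.randint(0, 50_000, (1, 16)))
    print(logits.shape)  # torch.Size([1, 16, 50000])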

ClaraForm|5 months ago

Hey, I know what the article wanted to say; see the last two-ish sentences of my previous response. My point is that the article might be misinterpreting the causes of, and solutions for, the problems it sees. Relying on the brain as an example of how to improve might be a mistaken premise, because maybe the brain isn't doing what the article thinks it's doing. So we're in agreement there, that the brain and LLMs are incomparable - but maybe the parts where they are comparable are more informative about the nature of hallucinations than the author thinks.

DavidSJ|5 months ago

> An LLM was only ever meant to be a linguistics model, not a brain or cognitive architecture.

See https://gwern.net/doc/cs/algorithm/information/compression/1... from 1999.

> Answering questions in the Turing test (What are roses?) seems to require the same type of real-world knowledge that people use in predicting characters in a stream of natural language text (Roses are ___?), or equivalently, estimating L(x) [the probability of x when written by a human] for compression.
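A toy illustration of that equivalence (my own sketch, not from the linked page): a model that assigns probability p(x) to a text can, via arithmetic coding, compress it to roughly -log2 p(x) bits, so better next-character prediction is literally better compression.

    import math

    def code_length_bits(text, prob_of_next):
        # total bits an arithmetic coder needs for `text` under the model
        return sum(-math.log2(prob_of_next(text[:i], ch))
                   for i, ch in enumerate(text))

    # Baseline: uniform over 27 symbols (a-z plus space). Any model that
    # predicts "Roses are ___" better than chance beats this bound.
    uniform = lambda context, ch: 1 / 27
    print(code_length_bits("roses are red", uniform))  # ~61.8 bits (13 * log2 27)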
