top | item 43817704


jasonshen | 10 months ago

With ChatGPT explicitly storing "memory" about the user and having access to the history of all chats, that can also change. It's not hard to imagine an AI-powered IDE like Cursor recognizing that when you rerun a prompt or paste in an error message, its original result was wrong in some way, and that it needs to "learn" from that feedback to improve its outputs.


nottorp | 10 months ago

Human memory is new neural paths.

LLM "memory" is a larger context with unchanged neural paths.
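The distinction can be made concrete with a minimal sketch: an LLM's "memory" feature typically just prepends stored notes to the prompt, so the model's weights stay frozen. Everything here (`call_model`, `MemoryChat`) is a hypothetical stand-in, not any vendor's actual implementation.

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real inference call; in either case the
    # network's weights ("neural paths") are unchanged at inference time.
    return f"response to: {prompt!r}"

class MemoryChat:
    """Sketch of 'memory' as context injection, not weight updates."""

    def __init__(self) -> None:
        self.memories: list[str] = []  # persisted notes about the user

    def remember(self, note: str) -> None:
        self.memories.append(note)

    def ask(self, question: str) -> str:
        # "Remembering" is just building a larger context window.
        context = "\n".join(f"[memory] {m}" for m in self.memories)
        prompt = f"{context}\n{question}" if context else question
        return call_model(prompt)

chat = MemoryChat()
chat.remember("user prefers Python examples")
print(chat.ask("show me a sorting snippet"))
```

The model itself is stateless between calls; all continuity lives in the ever-growing prompt, which is why this kind of memory degrades at context-window limits rather than consolidating the way human memory does.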

theK | 10 months ago

Maybe. I'd wager the next couple of generations of inference architecture will still have issues with context under that strategy. Trying to work with state-of-the-art models at their context boundaries quickly descends into gray-goop-like behavior for now, and I don't see anything on the horizon that changes that right now.