wps|17 hours ago
Could someone explain the appeal of account-wide memory to me? Anthropic's marketing indicates that nothing bleeds over, but I'm just so protective of my context that I cannot imagine letting even a heavily distilled version of my other chats and preferences have any weight on the output. As for certain preferences like code styling or response length, those are all a fit for custom instructions, with more detailed things in Skills. Ultimately, like many things in LLM web UX, it seems to cater to how the masses use these tools.
jjmarr|17 hours ago
They also don't know what "context" is, or that the LLM can only attend to a limited number of tokens at any given time. They just believe it knows everything at once.
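(For readers unfamiliar with the limit: every chat product has to fit the conversation into a fixed token budget, so older turns get dropped or summarized. A toy sketch of the truncation side, using a crude words-as-tokens approximation rather than a real tokenizer; the budget and messages here are made up for illustration:)

```python
def fit_to_budget(messages, budget=8):
    """Keep only the newest messages whose combined 'token' count fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):    # walk newest-first
        cost = len(msg.split())       # crude token estimate: one word ~= one token
        if used + cost > budget:
            break                     # anything older than this is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))       # restore chronological order

history = [
    "hello there",                   # oldest
    "tell me about roman history",
    "now about taxes",
    "thanks",                        # newest
]
print(fit_to_budget(history, budget=8))
# the oldest turns that no longer fit are gone
```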
deaux|17 hours ago
I can't think of much else, though, so I'm still curious what you or others use it for.
idopmstuff|1 hour ago
I find there's a virtuous cycle here: the more I use it, the more helpful it is. I fired my bookkeeper and have been using Claude with a QBO API key instead. Because it already had that context (along with other related business context), when I gave it the tax docs I'd given my CPA for 2024, plus my return, and asked it to find mistakes, it determined that he had not depreciated goodwill from an acquisition. The CPA confirmed this was his error and is amending my return.
Then I thought it'd be fun to see how it would do at constructing my 2024 return from just the same source docs my CPA had. The first time, it worked for an hour, then said it had generated the return, checked it against the 2024 numbers, and found they matched. I had removed the 2024 return beforehand to avoid poisoning the context with the answers, but it turned out it had a worksheet .md file it had been using on prior questions that I had not erased (and it then admitted it had started from the correct numbers).
To make sure I wouldn't hit that issue again, I tried the 2024 return once more, completely devoid of any historical context, in a folder entirely outside my usual Claude Code folder tree. It got my return almost entirely correct, but it missed the very same deduction it had earlier caught my CPA missing.
So for me, the buildup of context over time is fantastic and really leads to better results.
7734128|16 hours ago
I know the "memory" function can be disabled, but I have a hard time seeing that it would ever really be useful.
pfix|17 hours ago
I currently use ChatGPT for random insights and discussions about a variety of topics. The memory is basically an accumulated context about me, my preferences, and my interests, and ChatGPT uses it to tailor responses to my knowledge so I can relate better.
For me this is far more natural and easier than either crafting a default prompt preset or curating each conversation individually; that would be way too much overhead for discussing random shower thoughts between real-life stuff.
That's my use case, and I've discovered it can be detrimental for specific questions and prompts; carefully written prompts each time can be more beneficial. But my usage is really ad hoc, without the time for that. At least for ChatGPT.
When coding, this fails fast. There, regular context resets seem to be the more viable strategy.
jtokoph|17 hours ago
For example, instead of recommending a popular night club, it will recommend the stroll along the river to view the lit up skyline or to visit the night market instead.
It knows other preferences as well (exploring quirky neighborhoods, trying local fast food joints and markets).
bmurphy1976|11 hours ago
That alone drives me batty. I can easily spend a couple hours and multiple revisions iterating on a plan. Asking me every single time if I want to apply it is obnoxious.
joenot443|10 hours ago
Similarly, it remembers the dimensions of my truck, so towing/loading questions don't need extra clarification.
It's the small things.
gbalduzzi|17 hours ago
Are you suggesting that they should ignore the needs of the vast majority of their users?
I mean, of course they do, it would be worse otherwise
MagicMoonlight|11 hours ago
And it will give me a complete rundown of Roman life, because it knows what I was interested in before.
Or you can ask a tax question and it will know you're an organic rice farmer or whatever. Claude has the best implementation because it has both memory and previous-chat search, so it will actually read through relevant chats rather than guessing based on memories.
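(A hypothetical sketch of what previous-chat search could look like: rank prior conversations by naive keyword overlap with the new question, then read the top hits. The actual implementation is not public; the scoring, chat titles, and question here are invented for illustration:)

```python
def rank_chats(question, chats):
    """Rank prior chats by shared-word count with the question (toy scoring)."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(chat.lower().split())), chat) for chat in chats]
    # highest overlap first; chats with no overlap are filtered out entirely
    return [chat for score, chat in sorted(scored, reverse=True) if score > 0]

chats = [
    "rice farming irrigation schedule",
    "organic certification paperwork for the farm",
    "favourite roman emperors",
]
print(rank_chats("tax deductions for organic rice farming", chats))
```

A real system would use embeddings or a proper search index rather than word overlap, but the shape is the same: retrieve relevant prior chats, then read them instead of relying on compressed memories.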