wps | 17 hours ago

Could someone explain the appeal of account-wide memory to me? Anthropic’s marketing indicates that nothing bleeds over, but I’m just so protective of my context that I cannot imagine letting even a heavily distilled version of my other chats and preferences have any weight on the output. As for certain preferences like code styling or response length, those all fit in custom instructions, with more detailed things in Skills. Ultimately, like many things in LLM web UX, it seems to cater to how the masses use these tools.

jjmarr|17 hours ago

Most normal people want the LLM to remember their interests and favourite things, so they don't have to manually re-explain when asking for advice.

They also don't know what "context" is or that the LLM has a limited number of tokens it can understand at any given time. They just believe it knows everything at once.

deaux|17 hours ago

Do you have example prompts where this would be useful? Why would you want an LLM to know your favorite type of cheese? Now that I say that, I guess if you use it for recipes then it's useful if it remembers things like dietary restrictions. And even then a project seems like the better option.

I can't think of much else though so I'm still curious what you or others use it for.

idopmstuff|1 hour ago

I use Claude Code in a number of different parts of my business: coding internal applications, acting as a direct interface to SaaS via APIs, and just general internal use.

I find there is a virtuous cycle here where the more I use it, the more helpful it is. I fired my bookkeeper and have been using Claude with a QBO API key instead. Because it already had that context (along with other related business context), when I gave it the tax docs I had given my CPA for 2024's taxes plus my return and asked it to find mistakes, it determined that he did not depreciate goodwill from an acquisition. My CPA confirmed this was his error and is amending my return.

Then I thought it'd be fun to see how it would do at constructing my 2024 return just from the same source docs my CPA had. The first time I did it, it worked for an hour, then said it had generated the return, checked it against the 2024 numbers, and found they were the same. I had removed the 2024 return before having it do this to avoid poisoning the context with the answers, but it turned out it had a worksheet .md file from prior questions that I had not erased (and it then admitted that it had started from the correct numbers).

To make sure I wouldn't have that issue again, I tried the 2024 return again in a folder totally outside my usual Claude Code folder tree, completely devoid of any historical context. It actually got my return almost entirely correct, but it missed the very same deduction it had caught my CPA missing earlier.

So for me, the buildup of context over time is fantastic and really leads to better results.

AllegedAlec|17 hours ago

In the Claude web app I often use incognito mode precisely because I don't want results to be influenced by what we talked about earlier. It's getting rather annoying, to be honest.

qwertox|17 hours ago

Keep your user prefs minimal and use project memory instead: create a new project and it will only have access to your user prefs; everything else is fresh.

visarga|5 hours ago

I'm switching from Claude Web to Claude Code. Local files give me memory I actually control, unlike Anthropic's implementation. CC doesn't carry state between sessions — you just put whatever project context it needs in a file.
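
For anyone curious what that looks like in practice: Claude Code picks up a CLAUDE.md from the project directory, so the "memory" is just a file you edit by hand. A made-up example (the project details below are hypothetical, not a recommended template):

    # CLAUDE.md
    ## Project
    - Internal invoicing CLI, Python 3.12, SQLite for local storage
    ## Conventions
    - Small pure functions, type hints everywhere, pytest for tests
    - Never modify anything under legacy/
    ## Decisions already made
    - Keep SQLite; do not suggest switching to Postgres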

Mashimo|15 hours ago

Why not turn it off then?

bouzouk|9 hours ago

On the contrary, I cannot understand how people seriously use LLMs outside of software engineering without account-wide memory. When I ask things like "what do you think John should do next on project A?", I don’t want to have to explain in detail who John is, what project A is, and what John was working on before.

7734128|16 hours ago

The few times I've switched over to ChatGPT I've been dumbfounded by lines like "...since you already are using SQLite...", referring to projects from months ago.

I know the "memory" function can be disabled, but I have a hard time seeing that it would ever really be useful.

cedws|14 hours ago

Yeah for me it only ever polluted the context. Irrelevant information tends to oversteer the LLM and produce worse output.

astrange|5 hours ago

Gemini is terrible with personalization. It brings up everything in my bio nonstop no matter what the topic is.

pfix|17 hours ago

I can try!

I currently use ChatGPT for random insights and discussions about a variety of topics. The memory is basically a context about me, my preferences, and my interests that has grown over time, and ChatGPT uses it to tailor responses to my level of knowledge, so I can relate to them better.

For me this is far more natural and easier than either crafting a default prompt preset or setting up each conversation individually; that would be way too much overhead for discussing random shower thoughts in between real-life stuff.

This is my use case, and I've discovered that it can be detrimental for specific questions and prompts; I see that it can be more beneficial to have carefully written prompts each time. But my usage is really ad hoc, without the time for that. At least for ChatGPT.

When coding, this fails fast. There, regular context resets seem to be a more viable strategy.

wps|17 hours ago

I see what you mean, but I like having a clean slate even for those one-off questions. I don’t want a different answer to a philosophical inquiry just because the LLM remembers a prior position I’ve written about, you know?

gverrilla|9 hours ago

It all depends on your use case(s). For me, "account-wide" memory contains only: (a) a short description of my hardware/OS/display system/etc.; (b) my mobile hardware and OS version; and (c) my age, gender, city/country of residence, and health conditions.

jtokoph|17 hours ago

I've told the LLMs that, when traveling, I don't care about nightlife and alcohol. Because the model has a memory of this, when I ask for a sample itinerary for a 2-day stay in a new city, it won't waste hours of the day on the party street, wine tasting, etc.

For example, instead of recommending a popular night club, it will recommend the stroll along the river to view the lit up skyline or to visit the night market instead.

It knows other preferences as well (exploring quirky neighborhoods, trying local fast food joints and markets).

cyrusmg|17 hours ago

So it's because they want to be more like ChatGPT instead of more like Claude Code. I guess that makes sense - bigger market.

bmurphy1976|11 hours ago

"Stop asking me to apply the plan. I will tell you when I'm ready."

That alone drives me batty. I can easily spend a couple hours and multiple revisions iterating on a plan. Asking me every single time if I want to apply it is obnoxious.

Panoramix|6 hours ago

Think of things like your preferred units (meters, kg, cups, tablespoons, milliliters). Or: do not suggest recipes with ingredient X. Language preferences. Etc etc etc.

__alexander|11 hours ago

The appeal for me is not having to constantly repeat instructions. Imagine having to repeat dietary restrictions every time you ask for a recipe.

joenot443|10 hours ago

I own a lot of dirt bikes, boats, snowmobiles, mowers, and blowers. It's much easier for me to ask about "My Polaris" than it is to ask about my "2011 Polaris Switchback Assault".

Similarly, it remembers the dimensions of my truck, so towing/loading questions don't need extra clarification.

It's the small things.

gbalduzzi|17 hours ago

> it seems to cater to how the masses use these tools.

Are you suggesting that they should ignore the needs of the vast majority of their users?

I mean, of course they do; it would be worse otherwise.

wps|17 hours ago

Well, the masses are wrong. See: insane amounts of compute wasted on “thank you”, “haha true”, “redo it”, etc. I think the UI should be designed to avoid misuse, and I think an ever-growing distillation of your most common traits is not a good use of context length. If you want it, specify it. Maybe even hard limits on chat length: why are we 20 replies deep in a single chat? A user-friendly option could be a single button that distills the chat down and opens a new one with prebuilt instructions to continue the conversation. I’m no product designer though, just some thoughts.
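
To make that last idea concrete, here is a minimal sketch of what such a button could do under the hood, assuming the official anthropic Python SDK (the model name and prompt wording are placeholders, not anything Anthropic actually ships):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def distill_and_continue(transcript: str) -> list[dict]:
        """Summarize a long chat, then seed a fresh conversation with only the summary."""
        summary = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder: whatever model the chat was using
            max_tokens=500,
            messages=[{
                "role": "user",
                "content": "Summarize the key facts, decisions, and open questions "
                           "from this conversation in under 300 words:\n\n" + transcript,
            }],
        ).content[0].text

        # The new chat starts from the distilled context instead of 20 replies of history.
        return [{"role": "user",
                 "content": "Context carried over from a previous chat:\n" + summary}]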

MagicMoonlight|11 hours ago

Because I can say “do what you did before, but about the Romans this time”

And it will give me a complete rundown of Roman life, because it knows what I was interested in before.

Or you can ask a tax question and it will know you’re an organic rice farmer or whatever. Claude has the best implementation because it has both memory and previous-chat search, so it will actually read through relevant chats rather than guessing based on memories.
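
For what it's worth, the chat-searching half doesn't have to be magic. A toy sketch (illustrative only, not Anthropic's actual implementation; it assumes one saved transcript per .txt file in a local chats/ folder):

    from pathlib import Path

    def search_past_chats(query: str, chat_dir: str = "chats", top_k: int = 3) -> list[str]:
        """Return the stored transcripts that best match the query, by naive keyword overlap."""
        terms = set(query.lower().split())
        scored = []
        for path in Path(chat_dir).glob("*.txt"):  # one saved conversation per file
            text = path.read_text(encoding="utf-8")
            score = sum(text.lower().count(term) for term in terms)
            if score:
                scored.append((score, text))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        # The best matches get handed to the model as extra context for the new question.
        return [text for _, text in scored[:top_k]]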

CGamesPlay|17 hours ago

Sure, it's for those customers who don't have any idea what a "context window" is.

wps|17 hours ago

This seems to imply that customers assume by default that the LLM remembers their past chats? I feel like the UI makes it incredibly obvious it’s a clean slate every time? But then again, people ask these chatbots ridiculous meta questions all the time, expecting a correct answer.