top | item 47204571

Switch to Claude without starting over

502 points| doener | 13 hours ago |claude.com

235 comments

order

wps|13 hours ago

Could someone explain the appeal of account-wide memory to me? Anthropic’s marketing indicates that nothing bleeds over, but I’m just so protective of my context that I cannot imagine letting even a heavily distilled version of my other chats and preferences have any weight on the output. As for certain preferences like code styling or response length, these all fit in custom instructions, with more detailed things in Skills. Ultimately, like many things in LLM web UX, it seems to cater to how the masses use these tools.

jjmarr|12 hours ago

Most normal people want the LLM to remember their interests and favourite things, so they don't have to manually re-explain when asking for advice.

They also don't know what "context" is or that the LLM has a limited number of tokens it can understand at any given time. They just believe it knows everything at once.

AllegedAlec|12 hours ago

In online Claude I often use incognito mode precisely because I don't want results to be influenced by what we talked about earlier. It's getting rather annoying to be honest.

bouzouk|4 hours ago

On the contrary, I cannot understand how people seriously use LLMs outside of software engineering without account-wide memory. When I ask things like "what do you think John should do next on project A?", I don’t want to have to explain in detail who John is, what project A is, and what John was working on before.

gverrilla|5 hours ago

It all depends on your usecase(s). For me, "account-wide" memory has only: (a) short description of my hardware/os/display system/etc; (b) mobile hardware and os version; and (c) my age, gender, city/country of residence, and health conditions.

Panoramix|1 hour ago

Think of things like your preferred units (meters, kg, cups, tablespoons, milliliters). Or, do not suggest recipes with x ingredient. Language preferences. Etc etc etc.

7734128|11 hours ago

The few times I've switched over to ChatGPT I've been dumbfounded by lines like "...since you already are using SQLite...", referring to projects from months ago.

I know the "memory" function can be disabled, but I have a hard time seeing that it would ever really be useful.

pfix|12 hours ago

I can try!

I currently use ChatGPT for random insights and discussions about a variety of topics. The memory is basically a grown context about me, my preferences, and my interests, and ChatGPT uses it to tailor responses to my knowledge, so I can relate better.

For me this is far more natural and easier than either crafting a default prompt preset or setting up each conversation individually; that would be way too much overhead for discussing random shower thoughts between real-life stuff.

This is my use case, and I've discovered that it can be detrimental for specific questions and prompts; I see that carefully written prompts each time can be more beneficial. But my use case is really ad hoc usage without the time for that. At least for ChatGPT.

When coding, this fails fast. There, regular context resets seem to be the more viable strategy.

bmurphy1976|7 hours ago

"Stop asking me to apply the plan. I will tell you when I'm ready."

That alone drives me batty. I can easily spend a couple of hours and multiple revisions iterating on a plan. Asking me every single time if I want to apply it is obnoxious.

jtokoph|12 hours ago

I've told the LLMs that, when traveling, I don't care about nightlife and alcohol. Because they have a memory of this, when I ask for a sample itinerary for a 2 day stay in a new city, it won't waste hours in the day on the party street, wine tasting, etc.

For example, instead of recommending a popular night club, it will recommend the stroll along the river to view the lit up skyline or to visit the night market instead.

It knows other preferences as well (exploring quirky neighborhoods, trying local fast food joints and markets)

joenot443|5 hours ago

I own a lot of dirt bikes, boats, snowmobiles, mowers, and blowers. It's much easier for me to ask about "My Polaris" than it is to ask about my "2011 Polaris Switchback Assault".

Similarly, it remembers the dimensions of my truck, so towing/loading questions don't need extra clarification.

It's the small things.

__alexander|7 hours ago

The appeal for me is not having to constantly repeat instructions. Imagine having to repeat dietary restrictions every time you ask for a recipe.

gbalduzzi|12 hours ago

> it seems to cater to how the masses use these tools.

Are you suggesting that they should ignore the needs of the vast majority of their users?

I mean, of course they do, it would be worse otherwise

MagicMoonlight|6 hours ago

Because I can say “do what you did before, but about the romans this time”

And it will give me a complete rundown of Roman life, because it knows what I was interested in before.

Or you can ask a tax question and it will know you’re an organic rice farmer or whatever. Claude has the best implementation because it has both memory, and previous chat searching. So it will actually read through relevant chats, rather than guessing based on memories.

CGamesPlay|13 hours ago

Sure, it's for those customers who don't have any idea what a "context window" is.

xrd|7 hours ago

The prompt you can copy is this:

  I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: [date saved, if available] - memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries. After the code block, confirm whether that is the complete set or if any remain.

Why wouldn't a smart OpenAI PM simply add something "nefarious" on the frontend proxy to "slow down" any requests with exactly that prompt?

I bet they would get their yearly bonus by achieving their KPI goals.

dimitri-vs|5 hours ago

I think they already are. When I used the prompt with 5.2 it gave very concise and general info, but if you use older models (5.1 Instant or o3) you get a ton of detail.

MagicMoonlight|6 hours ago

They can, but then you could tell it to “don’t not do what I’m asking” and force it through. It’s not exactly “programming” with these systems, it’s all just slop.

And the reputational harm would outweigh the benefits of trying to fuck over people leaving.

siliconc0w|2 hours ago

I switched to Claude, but the token usage and limits are much more noticeable. One or two coding questions and I'm at my session limit. And that limit is shared with chat too.

I was mostly able to get by with $20 codex but I'll probably have to splurge for the Max plan.

vgalin|2 hours ago

> And that is shared with chat too.

Huh, I didn't know about that. I'm trying Claude Pro for the first time while comparing it against ChatGPT and I'm (sadly) not impressed at the moment.

When I asked both Codex and Claude Code to "look into" an issue of medium-to-high complexity in a code base, Codex went with the fix I had in mind and directly made code changes without being asked, or at least without asking for permission. It only used a few percent of its 5-hour limit to do it, on `High`.

Claude, meanwhile, misdiagnosed the core of the issue on its first pass (even on Opus 4.6 + Thinking). I had to guide it in the right direction, and despite being given the 'answer', it was quite a long process compared to Codex's one-shot. And it hit the 5-hour limit before it could finish solving the issue.

elAhmo|2 hours ago

Hmm, I had the opposite experience when I tried Codex 5.2 after using Claude for almost a year. Codex was on par or better for me at coding, and seemingly an order of magnitude cheaper.

outlore|12 hours ago

I tried all of Codex, OpenCode, Claude Code and Cursor these past few weeks. It was surprising to me that all of them have slightly different conventions for where to put skills, how to format MCP servers (how environment variables need to be specified etc), what the AGENTS/CLAUDE file needs to be called, what plugins/marketplaces are...it's a big mess for anyone trying to have a portable config in their dotfiles that can universally apply to any current and future agent.

It also showed me the difference between expectation and reality...even though these are billion dollar companies, they still haven't figured out how to make lag-free TUIs, non-Electron apps, or even respect XDG_CONFIG. The focus is definitely more on speed and stuffing these tools full of new discoveries and features right now

There's a bit of psychology around models vs. harnesses as well. You can't shake off the feeling that maybe Claude would perform better in its native harness compared to VSCode/OpenCode. Especially because they've got so many hidden skills (like the recently introduced /batch), that seem baked into the binary?

The last thing I can't figure out is computer use. Apparently all the vendors say that their models can use a mouse and keyboard, but outside of the agent-browser skill (which presumably uses playwright), I can't figure out what the special sauce is that the Cloud versions of these Agents are using to exercise programs in a VM. That is another reason why there is a switching cost between vendors.

jspdown|35 minutes ago

I've been using Claude for a little over a year, but the recent events with DoW are making me want to explore European alternatives. I'm willing to give Devstral 2 a try, but I'm not sure what to expect. In terms of tool calling and coding abilities, should I expect something closer to Sonnet 3.5 or to Sonnet 4.5?

brikym|12 hours ago

Hey Anthropic, how about you use AGENTS.md for one thing.

Sammi|10 hours ago

Before this week I was sure Anthropic were actually just as soulless as OpenAI, simply because they don't support open standards like AGENTS.md and /.agents/skills. They could so easily win the support of the open source crowd if they just supported open standards like these.

The /.agents/skills issue for Claude Code is here: https://github.com/anthropics/claude-code/issues/16345

Their automatic close bot will close it soon, as it's been three weeks since the last comment.

2001zhaozhao|12 hours ago

Just make a symlink of CLAUDE.md -> AGENTS.md

I have seen quite a few open source projects do this. It works quite well.

Another alternative is to create CLAUDE.md with the exact contents: "@AGENTS.md"
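Both alternatives amount to a one-liner in the repo root. A minimal sketch (the directory names and file contents here are just for illustration):

```shell
mkdir -p demo1 demo2

# Option 1: symlink, so Claude Code and AGENTS.md-aware tools share one file
echo "# Project instructions" > demo1/AGENTS.md
ln -s AGENTS.md demo1/CLAUDE.md

# Option 2: a CLAUDE.md whose only content is an @-import of AGENTS.md,
# which Claude Code expands when it loads the file
echo "# Project instructions" > demo2/AGENTS.md
printf '@AGENTS.md\n' > demo2/CLAUDE.md
```

The symlink keeps the two files byte-identical; the `@AGENTS.md` import leaves room to add Claude-specific lines later without duplicating the shared instructions.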

tomComb|4 hours ago

I felt that way too, until I noticed how different their schemes are for discovering these files, e.g. Claude will pick up context files in parent folders, and Codex doesn’t.

Maybe it’s better that they maintain different names to prevent people from assuming that they work the same

deaux|12 hours ago

Now that would make it easier for Codex users to switch indeed! This seems like the best timing for it they're ever gonna get, and worth the ultra tiny loss of marketing value their "CLAUDE.md" naming provides.

For the Anthropic employees here reading along, pitch it to whoever has kept blocking this, because you need to get the most out of this opportunity here.

Handy-Man|6 hours ago

Why would they? They were first with CLAUDE.md. Others could have adopted it if they wanted. I don’t see a reason for Claude to change their approach.

Joeri|12 hours ago

I already switched to claude a while ago. Didn’t bring along any context, just switched subscriptions, walked away from chatgpt and haven’t touched it again. Turned out to be a non-event, there really is no moat.

I switched not because I thought Claude was better at doing the things I want. I switched because I have come to believe OpenAI are a bad actor and I do not want to support them in any way. I’m pretty sure they would allow AGI to be used for truly evil purposes, and the events of this week have only convinced me further.

kdheiwns|10 hours ago

Yesterday was my first time trying it. One thing that felt a bit strange to me was that I asked it something and the response was just one paragraph. Which isn't bad or anything but it felt... strange? Like I always need to preface ChatGPT/gemini/whatever question with "Briefly, what is..." or it gives me enough fluff to fill a 5 page high school essay. But I didn't need to do that and just got an answer that was to the point and without loads of shit that's barely related.

And the weirdest thing that I noticed: instead of skimming the response to try finding what was relevant, I just straight up read it. Kind of felt like I got a slight amount of focus ability back.

Accuracy is something I can't really compare yet (all chatbots feel generally the same for non-pro level queries), but so far, I'm fairly satisfied.

KellyCriterion|12 hours ago

> there really is no moat.

For ChatGPT and Gemini, yes.

But Claude has a very deep and big one: it's the only model that gets production-ready output on the first detailed prompt. Yesterday I had used up my tokens by noon, so I tried some output from Gemini & Co. I presented a working piece of code which is already in production:

1. It changed things like "Touple.First.Date.Created" and "Touple.Second.Date.Created" without noticing, breaking the code by changing them to "Touple.FirstDate" and "Touple.SecondDate"

2. There was a const list of 12 definitions for a given context; when told to rewrite the function, it just cut 6 of those 12 definitions, making the code fail to compile. I asked why they were cut: "Sorry, I was just too lazy typing" ?? LOL

3. There is a list holding some items, "_allGlobalItems"; it simply changed the name in the function to "_items", and the code didn't compile

As said, a working version of a similar function was given upfront.

With Claude, I never have such issues.

crossroadsguy|10 hours ago

I wrote off ChatGPT/OpenAI because of Sam Altman and those eyeball scan things - so sort of even before all this was a rage and centre stage. Sometimes it's just the gut feeling, and while it may not always be accurate, if something doesn't "feel" right, maybe it is not right. No one else is all good either, but what I mean to say is there are some entities/people who repeatedly don't feel right, have things attached to them that never felt right, etc., and you get a combined "gut feeling". At least that's how it was for me.

Buttons840|5 hours ago

I love no moat!

One day I'd like to set up a server in my basement that just runs a few really, really nice models, and then get some friends and co-workers to pay me $10 a month for unlimited access.

All with the understanding that if you hog the entire server I'm going to kick you off, and if you generate content that makes the feds knock on my door I'm turning over the server logs and your information. Don't be an idiot, and this can be a good thing between us friends.

It would be like running a private Minecraft server. Trust means people can usually just do what they want in an unlimited way, but "unlimited" doesn't necessarily mean you can start building an x86 processor out of redstone and lagging the whole server. And you can't make weird naked statues everywhere either.

Usually these things aren't issues among a small group. Usually the private server just means more privacy and less restriction.

jacquesm|11 hours ago

> I’m pretty sure they would allow AGI to be used for truly evil purposes

It's perfectly possible that 'truly evil purposes' were the goal all along. Slogans and ethics departments are mere speed bumps on the way to generational wealth.

rustyhancock|12 hours ago

I know this is necessarily a very unpopular opinion here.

I think HN in particular as a crowd are very vulnerable to the halo effect and group think when it comes to Anthropic.

Even being generous they are only very minimally a "better actor" than OpenAI.

However, we are so enthralled by their product that we tend to let the view bleed over to their ethics.

Saying you want your tools used in line with the US constitution, within the US, on one particular point is hardly a high moral bar; it's self-preservation.

All Anthropic have said is:

1. No mass domestic surveillance of Americans.

2. No fully autonomous lethal weapons yet.

My goodness that's what passes for a high moral standard? Really anything that doesn't hit those very carefully worded points is not "evil"?

bossyTeacher|10 hours ago

I tried Claude recently (after they dropped the nonsensical requirement to give them your phone number) and I was surprised to see how significantly less sycophantic it was. ChatGPT, unless you are talking hard science, tends to be overly agreeable. Claude questions you a lot (you ask for x and it asks you things like: why are you interested in x; or, based on our previous convo, x might not be suitable for you; or, I see your point, but based on our previous convo, y is better than x; etc.). ChatGPT rarely does that.

Of course, there's also OpenAI being run by openly questionable people, while Dario so far doesn't seem anywhere near as bad, even if none of them are angels.

mannanj|7 hours ago

Yes, they have a great marketing team and a powerful astroturfing presence though, especially with the recent "Claude beat up OpenClaw! OpenAI is supporting the community by buying it!" and that nonsense.

Though tbh I hardly feel Claude is innocent either. When their safety engineer/leader left, I didn't see any statement from the Anthropic team, not one addressing his legitimate points about why he left. Instead we got an eager over-push in the media cycle of "Anthropic standing up to DOD! Here's why you can trust us!"

It all sounds too similar to propaganda and astroturfing to me.

samiv|9 hours ago

I did the same thing and cancelled my OpenAI plan today. Besides boycotting it for their latest grifting I also found it to not really produce much value in my use cases.

Moving back to doing this archaic thing called using my own brain to do my work. Shocking.

Gooblebrai|11 hours ago

Claude still doesn't have image generation?

neya|12 hours ago

[deleted]

bko|5 hours ago

I never understood the point of this kind of comment. It doesn't add any value to the discussion. It's basically two paragraphs with a presupposition (OpenAI bad) and the author showing how virtuous he is by canceling his subscription. No explanation, argument, or nuance. It's just virtue signaling. Actually... I guess I do know the point of this kind of comment. I just don't know why these kinds of comments get upvoted, even if you do agree that OpenAI is bad.

joshstrange|8 hours ago

I’m pretty divided on “memory”. There are times it can feel almost magical but more often than not I feel like I am fighting with the steering wheel.

Whenever I’m in a conversation and it references something unrelated (or even related) I get the “ick”. I know how context poisoning (intentional or not) works and I work hard to only expose things to the model that I want it to consider.

There have been many times that I’ve started a fresh chat so as not to bring along the baggage (or wrong turns) of a previous chat, but then it will say “And this should work great for <thing I never mentioned in THIS chat>”, and at that moment my spidey-sense tingles and I start wondering: crap, did it come to its conclusion based mostly/only on the new context, or did it “take a shortcut” and use context from another chat?

Like I said, I go out of my way to not “lead the witness” and so when the “witness” can peek at other conversations, all my caution is for naught.

I encourage everyone to go read the saved memories in their LLM of choice, I’ve cleaned out complete crap from there multiple times. Actually wrong information, confusing information, or one-off things I don’t want influencing future discussions.

The custom (or rather addition to the) system prompt is all I feel comfortable with. Where I give it some basic info about the coding language I prefer and the OSes that I’m often working with so that I don’t have to constantly say “actually this is FreeBSD” or “please give that to me in JS/TS instead of Python”.

The only thing that has, so far, kept me from turning off memory is that I’m always slightly cautious of going off the beaten path with something so new and fast-moving. I often want to stay as close to the “stock” config as possible, since I know how testing/QA works at most places (the further off the beaten path you go, the more likely you’ll run into bugs). Also so that I can experience what everyone else is experiencing (within reason).

Lastly, because, especially with LLMs, I feel like people who over-customize end up with fragile systems. I think a decent portion of the “N+1 model is dumber” or “X model has really gone downhill” complaints is partially due to complicated configs (system prompts, MCP, etc.) that might have helped at some point (dumber model, less capability) but are a hindrance to newer models. That, or they never worked and someone just kept piling on more and more, thinking it would help.

rudedogg|4 hours ago

I've been thinking this too. I frequently do deep research on some systems programming technique, ask it to generate a .md for it, and then I use that in later sessions with Claude Code "look at the research I collected in {*-research}.md and help me explore ways to apply it to {thing}".

At the research step it frequently (always?) uses memory to direct/scope the research to what I typically work on, but I think that kind of pigeonholes the model and what it explores. And the memory doesn't quite capture all the areas I'm interested in, or want to directly apply the research to.

And regarding the crap in memories, I found the same. Mine at work mentioned I'm an expert at a business domain I have almost zero experience with.

I feel like the companies building this stuff accept a lot of "slop" in their approach, and just can't see past building things by slopping stuff into prompts. I wish they'd explore more rigid approaches. Yes, I understand "the bitter lesson" but it seems obvious to me some traditional approaches would yield better results for the foreseeable future. Less magic (which is just running things through the cheapest model they have and dumping it in every chat). It seems like poison.

Related: https://vercel.com/blog/agents-md-outperforms-skills-in-our-...

Also, agent skills are usually pure slop. If you look through https://skills.sh on a framework/topic you're knowledgeable in you'll be a bit disheartened. This stuff was pioneered by people who move fast, but I think it's now time to try and push for quality and care in the approach since these have gotten good enough to contribute to more than prototype work.

knotbin|1 hour ago

Weird to push this feature as if it's for new users when it only works if you already have a Pro subscription

hoytschermerhrn|28 minutes ago

I'm sure this was a hastily put-together response to the growing calls to delete ChatGPT.

peteforde|10 hours ago

I got very excited when I saw this title, because I've wanted to consolidate on Claude for a long time. I have been using ChatGPT very extensively for Q&A for 2+ years and I have hundreds of long, very technical conversations which I constantly search and refer to.

The problem (for me, anyway) is that even several megabytes worth of quality "memory" data on my profile would not allow me to migrate if it can't also confidently clone all of my chat history with it.

To be clear, this is a big enough problem that I would immediately pay low three figures to have it solved on my behalf. I don't really want any of the providers to have a walled garden of all my design planning conversations, all of my PCB design conversations. Many are hundreds of prompts long. A clean break is not even remotely palatable short of OAI going full evil.

Look, I'd find it convenient for Claude to have a powerful sense of what I've been working on from conversation #1 onwards. But I absolutely refuse to bifurcate my chat history across multiple services. There is a tier list of hells, and being stuck on ChatGPT is a substantially less painful tier than needing to constantly search two different sites for what's been discussed.

lxgr|10 hours ago

This should in theory be solvable by using a custom frontend and treating the various backend APIs as stateless inference providers, but everything I've tested falls flat on a few aspects: chat history RAG and web search, and to a lesser extent tool use.

Yes, all of these are theoretically possible (the APIs all support web search now, as far as I know, there are RAG APIs too, and tool use has been supported for a while), but the various "chat" models just seem to be much better at using their first-party tools than any third-party harness, which makes sense, given that that's what they've been trained on.

khasan222|7 hours ago

It was amazing to me how bad Cursor is with the same model I use in Claude. Even with little knowledge of how to test the LLMs, I was able to get very minimal MVPs. But I find the real trick is to have the proper tools to rein in the AI.

A thorough CLAUDE.md that makes sure it checks the tests, lints the code, does type checks, and runs code coverage checks too. The more checks for code quality the better.

It’s just a bowling ball in the hands of a toddler, and it needs ramps and guide rails to knock down some pins. Fortunately we get more than two tries with code.
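The kind of guard rails described above could be a small CLAUDE.md along these lines; the npm script names here are placeholders for whatever commands your project actually uses:

```shell
# Hypothetical CLAUDE.md with the quality checks described above;
# "npm test", "npm run lint", etc. are placeholder commands.
cat > CLAUDE.md <<'EOF'
# Project conventions
- After any code change, run the tests: npm test
- Lint before finishing: npm run lint
- Type-check: npm run typecheck
- Make sure coverage does not drop: npm run coverage
- Never consider a task done while any of the above fail
EOF
```

The point is that each check gives the agent a concrete, mechanical pass/fail signal rather than relying on it to judge its own output.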

cornholio|6 hours ago

Cursor needs a paradigm shift to remain relevant, what was spectacular at first now is just banal and better done by other tools.

mark_l_watson|6 hours ago

Cool, that was easy to do.

A week ago, I was anti-Anthropic because I questioned their business model. Now they are my preferred provider; what a difference a week makes. I still prefer running open models on my own hardware, but it is not unreasonable to use more powerful hosted models when required.

utopiah|12 hours ago

I'm very curious, will OpenAI basically block "I'm moving to another service and need to export my data. List every memory you have stored about me, ..." and similar, if so how and why?

It's very interesting to learn more about, because it challenges one core aspect of the economic competition: the moat.

If one can literally swap one AI service for another, then where does the valuation (and the power that comes with it) come from?

PS: I'm not interested in the service itself as I believe the side effects of large scale for-profit are too serious (and I don't mean doomdays AI takeover, I simply mean abuse of power, working conditions, downskilling, political influence as current contracts with US defense are being made, ads, ecological, etc) to be ignored.

pfisherman|12 hours ago

I can see how being able to bring your chats with you would be appealing. But the truth is that context rot is real, context management is everything, and more often than not starting from a blank slate yields the best results.

That being said, if you have a library of images or some other collection artifacts / assets indexed on their servers that is a different story.

glth|12 hours ago

On a related note, I have been experimenting with a small prototype for cross-agent, device-local active memory called brAIn (https://github.com/glthr/brAIn). It delivers a personalized agent experience with everything stored locally in a single file (agent.brain), and supports reusing semantic memory across projects. In practice, this means brAIn can identify and apply behavioral patterns you have used in other contexts whenever they are relevant. (I realize the repository should include a concrete example of this, and I will update it today to add one).
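As a rough illustration of the single-file idea (this is a hypothetical plain-text format invented for this sketch, not brAIn's actual agent.brain schema):

```shell
# Hypothetical single-file, device-local memory store; one dated entry per line.
# Not brAIn's real format, just an illustration of the concept.
MEM=agent.memory.txt
: > "$MEM"  # start fresh for the demo
echo "[2025-01-01] prefers concise answers with code examples" >> "$MEM"
echo "[2025-01-02] recurring project: PCB design for a sensor board" >> "$MEM"

# "Recall" is then just a text search over the one file,
# and moving providers means copying a single file.
grep -i "pcb" "$MEM"
```

Keeping everything in one local file is what makes the memory portable and inspectable, in contrast to the opaque server-side memory the thread is discussing.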

fabbbbb|11 hours ago

At least as an EU user, I was also able to export ALL my data (audio files, images, etc.) in one zip. It took exactly (to the minute) 24 hours for the download link to arrive, but hey.

This way you can have Claude distill the memory as you wish.

mk12|8 hours ago

I took the current events as an opportunity to try switching to Claude and I actually like it much better so far.

sheept|11 hours ago

This method of copying an LLM-generated summary of your preferences into Claude memory feels similar to their recommendation to use /init to generate a CLAUDE.md based on the project, which recent research[0] suggests may be counterproductive.

I would assume both Claude memory and CLAUDE.md work best when they're carefully curated, only containing what you've found yourself having to repeat.

[0]: https://arxiv.org/abs/2602.11988

Wowfunhappy|7 hours ago

I don't understand how people use these apps with memory enabled. I am always carefully controlling the context of each conversation. The idea that past conversations could bleed into current ones is unthinkably terrible.

downboots|7 hours ago

If you delete a conversation, it's only hidden from you. That's not deletion.

mentalgear|9 hours ago

Never subscribed to ChatGPT as it always felt shady, but I'm thinking of subscribing now, with Claude instead of Gemini/Google.

christophilus|7 hours ago

Interesting. In my mind, I find Google to be the shadiest of the three. It’s the only one I don’t pay for.

knallfrosch|12 hours ago

I'd be happy if I was able to use Claude Code at all

VSCode extension, "Please log in"

I authorize it, it creates an API key, callback. "Hello Claude, this is a test." "Please log in."

So yeah... priorities?

pfisherman|12 hours ago

Why not use Claude Code from the cli and follow along in your IDE? I did not quite believe when people were telling me or understand what I was missing until I tried it, but after trying that set up I am convinced that it is superior. I don’t have any hard data to back it up, but it feels much more capable that way.

morgango|2 hours ago

That is the sound of someone else's lunch being eaten.

henry_pulver|9 hours ago

Amusing that Anthropic's approach to migrating context is asking their competitor's product to hand over the data it's stored about you.

Must be some of the lowest switching costs I've seen which doesn't bode well for OpenAI's consumer revenues...

tkel|8 hours ago

Turns out the DoD has a trillion dollar annual unaccountable money sink, plenty there to make up for it

bruceyao1984|11 hours ago

Being able to import context and preferences from other AI providers in one step saves a lot of time, especially for ongoing projects. It makes Claude feel seamless and continuity-friendly. Having this on all paid plans adds great value for heavy users.

raxskle|5 hours ago

Claude is a great product, and I've been using it all the time. Sam must think so too.

willtemperley|12 hours ago

If Claude could stay available I might consider it. Unfortunately right now, out of the big three, only Gemini has reliable uptime. As much as I dislike Google it's the only reliable option.

wps|12 hours ago

Gemini’s web UI and mobile app are horrible. Gemini outputs malformed links that lead BACK to gemini.google.com. There are constant bugs with the side panel not showing your chats, or the current chat timing out for no reason. Also, the mobile app has an issue where, if your text input is too long, the entire text entry box lags, even to the point of locking up the whole app. OpenRouter’s web UI runs circles around all the frontier lab UIs. I even prefer their PWA to any of these mobile apps.

miyuru|12 hours ago

I don't like Gemini's personality. It acts like it knows it all.

vldszn|7 hours ago

Seems like their page is crashing now in Chrome on iOS.

kvirani|12 hours ago

Nice. Just cancelled my OpenAI Plus sub.

almosthere|5 hours ago

Isn't that the point of agents.md?

RobotToaster|10 hours ago

Would be a lot easier if they weren't trying to ban third party interfaces

siva7|13 hours ago

So OpenAI will likely have this same feature by tomorrow. A feature to pollute your context window.

Barbing|12 hours ago

I would’ve said they’d nerf the prompt:

>I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: [date saved, if available] - memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries. After the code block, confirm whether that is the complete set or if any remain.

butILoveLife|5 hours ago

OpenAI made it easy, no import needed! How?

I bought the enterprise version, and it made the memory no longer searchable...

Then, after the obvious degradation in performance, I switched to Claude and was happy with it... But by cancelling enterprise, it lost all its memory.

My wife was sad; the recipes it made were gone forever... But hey, that makes it really easy to never give OpenAI money again.

adam12|8 hours ago

Actually, it feels good to start over.

axseem|12 hours ago

Have they just added it? That's a smart move.

jascha_eng|12 hours ago

Memory in general-purpose chat apps is actually more harmful than helpful, imo. It biases the LLM's responses toward your background, which has the same effect as filter bubbles: you end up getting your own thoughts spat back at you.

Of course, sometimes this is useful, if you only use your chatbot to ask personal things like "What should I eat today?"

But if you use it for anything else, you're much better off having full control over the prompt. I can always say "Hey, btw, I'm German and heavily anti-surveillance; what should I know about the recent Anthropic DoW situation?", but with memory I lose the option of leaving out that first part.

fernando_campos|12 hours ago

I will also try Claude, but I like OpenAI's ChatGPT very much.

MagicMoonlight|6 hours ago

That’s hilarious. The walled garden does not exist when you can just ask the UI to extract all of its data for you.

mihaaly|8 hours ago

I'd rather switch it to nowhere but local. I'm not completely sure about the details, but I'm leaning heavily in that direction and investigating it. There are plenty of chat and agentic tools that can access multiple models, and everything is evolving fast (tools go extinct and come into existence), so we're better off keeping ourselves flexible and not tied to any one solution. Especially not storing data in accounts; the fate of those is uncertain.

sylware|9 hours ago

Is anybody aware of a public token (severely limited) I can use to test Claude's coding ability? You know, using curl.

I'm itching to test Claude on assembly coding and C++-to-plain-C ports.
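For reference, what I'd be hitting is just Anthropic's standard Messages API. A minimal sketch with curl (the endpoint and header names are from Anthropic's public docs; the model id is an assumption that may need updating, and as far as I know there's no public/free token, so you'd still need your own key):

```shell
# Minimal Claude request via curl. Assumes ANTHROPIC_API_KEY is set in
# the environment; the model id below is an assumption and may be stale.
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Port this C++ function to plain C: int f(const std::vector<int>& v) { return (int)v.size(); }"}
    ]
  }'
```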

syndacks|3 hours ago

I have the $20 plan for both and like each for unique reasons. How do you all switch your programming paradigms between Codex and CC?

lyu07282|12 hours ago

I just wish Claude integrated multi-modal/image generation; that's the feature I miss most in Claude coming from ChatGPT.

villgax|12 hours ago

I wasted 10 minutes of my life unfollowing every unapologetic OpenAI dev on Twitter; that's how low this company has stooped...

jccx70|6 hours ago

[deleted]

agenthustler|10 hours ago

[deleted]

mentalgear|9 hours ago

Interesting, do you have any repo links or other sources on your experiment? Also, regarding the prior stale-state looping: don't you think the agent could detect that by itself if given a sub-task to monitor for it?

coldtrait|7 hours ago

As someone who can't afford to care about ethics and pay a monthly subscription fee, is there anything in the regular Claude chat that beats OpenAI?

bastawhiz|5 hours ago

All of the Claude models are smarter than the GPT models. I had a few threads that I migrated from GPT to Claude, and in every single one Claude pointed out problems. Two examples:

1. In one, I was putting together a server build. Claude correctly pointed out some incompatibilities in some parts that GPT had recommended.

2. In another chat, I had asked for help interpreting lab results and suggesting supplements. Claude pointed out that GPT was over-interpreting the results and suggesting things that weren't backed up by facts.

I presented Claude's response back to GPT and in both of these specific cases, GPT admitted it was wrong and didn't have any rebuttal. It's hard to say without doing a more scientific experiment whether GPT is indeed worse, but anecdotally I find myself pointing out flaws in Claude's reasoning far less frequently than GPT, especially with Opus.

Another less important distinction: GPT has a very distinct writing style that heavily formats responses and repeats itself a few times. Claude is succinct and mostly writes like a person might. It's easier to talk to and feels less "cringe" and sycophantic.

ericol|7 hours ago

I regularly (say, once a month) compare results across Claude, Gemini, and ChatGPT. Just for reasons, not because I'm looking for any benefit in changing.

It's not "fair", in that I pay for Claude [1] and not for the others, so model availability is only complete for Claude.

While I did sometimes like how the others presented things, I came to really like Sonnet's "voice" a lot over theirs.

Keep in mind that Opus doesn't have the same voice, and I don't like it as much.

[1] I pay for the lower tier of their Max offering.

MagicMoonlight|6 hours ago

It’s the best platform for serious work.

ChatGPT swings between writing degenerate free use shit and telling you that you should wait until marriage. Lots of moralism to it, really tries to censor you and manipulate you, even in normal conversations. Generally smart and capable, but the whiny attitude gets old.

Grok has zero filter, but is dumber than the others. Definitely built around cheapness. Caps answers at about 2500 words at most. Can be very funny because it will go along with anything.

Gemini sells all your data and doesn’t seem to have much of note. Offers some nice formatting options.

Claude is business focused so it won’t do anything degenerate, but its answers in general aren’t whiny. It might not do something, but it doesn’t attack you with morality.

Claude does not cap answer length and will do whatever needs doing. Their pricing is based around true usage, not message quantities, so it’ll write a mega message if it needs to.

It has the best memory implementation, combining both memories and RAG of your chat history. Projects have their own independent memories and RAG.

Claude Code is ridiculously capable. In a few hours I produced something that would otherwise have taken months and at least £50,000 to produce.