jampa | 1 month ago

Slightly off topic, but does anyone feel that they nerfed Claude Opus?

It's screwing up even in very simple rebases. I got a bug where a value wasn't being retrieved correctly, and Claude's solution was to create an endpoint and use an HTTP GET from within the same back-end! Now it feels worse than Sonnet.

All the engineers I asked today have said the same thing. Something is not right.

eterm|1 month ago

That is a well recognised part of the LLM cycle.

A model or new model version X is released, everyone is really impressed.

3 months later, "Did they nerf X?"

It's been this way since the original ChatGPT release.

The answer is typically no; it's just that your expectations have risen. What was previously a mind-blowing improvement is now expected, and any missteps feel amplified.

quentindanjou|1 month ago

This is not always true. LLMs do get nerfed, and quite regularly: usually because the provider discovers that users are using them more than expected, because of user abuse, or simply because the product attracts a larger user base. One recent nerf was the drastic reduction of the Gemini context window.

What we need is an open and independent way of testing LLMs, and stricter regulation requiring disclosure of product changes when the product is paid for under a subscription or prepaid plan.

jampa|1 month ago

I'd usually agree with this. But I am using the same workflows and skills that were a breeze for Claude, and now they cause it to run in cycles and require intervention.

This is not the same thing as "omg, vibes are off"; it's reproducible. I am using the same prompts and files and getting way worse results than with any other model.

mrguyorama|1 month ago

Also, people who were lucky and had lots of success early on, but then start to run into the actual problems of LLMs, will experience that as "it was good and then it got worse" even when nothing actually changed.

If LLMs have a 90% chance of working, there will be some who have only success and some who have only failure.

People are really failing to understand the probabilistic nature of all of this.

"You have a radically different experience with the same model" is perfectly possible with fewer than hundreds of thousands of interactions, even when you both interact in comparable ways.
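The point about sample variance can be made concrete with a quick simulation. A minimal sketch, where the 10% per-task failure rate and the "flawless" vs. "frustrated" thresholds are assumptions for illustration, not measured numbers:

```python
import random

random.seed(0)
P_FAIL = 0.1       # assumed chance any single task goes wrong
TASKS = 20         # tasks each user runs with the same model
USERS = 100_000

flawless = frustrated = 0
for _ in range(USERS):
    fails = sum(random.random() < P_FAIL for _ in range(TASKS))
    if fails == 0:
        flawless += 1      # this user's experience: "the model is amazing"
    elif fails >= 3:
        frustrated += 1    # this user's experience: "the model got nerfed"

print(flawless / USERS, frustrated / USERS)  # roughly 0.12 and 0.32
```

Under these assumptions, with the exact same model, about one user in eight has a perfect run of 20 tasks while roughly a third hits three or more failures, which is plenty to produce contradictory anecdotes.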

olao99|1 month ago

Just because it's been true in the past doesn't mean it will always be the case.

spike021|1 month ago

Eh, I've definitely had issues where Claude can no longer easily do what it's previously done. That's with consistently documenting things well in appropriate markdown files and resetting context here and there to keep confusion minimal.

F7F7F7|1 month ago

I don't care what anyone says about the cycle, or the implication that it's all in our heads. It's bad bad.

I'm a Max x20 subscriber who had to stop using it this week. Opus was regularly failing at the most basic things.

I regularly use the front-end skill to pass in mockups, and Opus was always pixel-perfect. This last week it seemed like the skill had no effect.

I don’t think they are purposely nerfing it but they are definitely using us as guinea pigs. Quantized model? The next Sonnet? The next Haiku? New tokenizing strategies?

ryanar|1 month ago

I noticed that this week. I have a very straightforward Claude command that lists exact steps to follow to fetch PR comments and bring them into the context window. For example, step one is to call `gh pr view my/repo`, but it would call it with anthropiclabs/repo instead; it wouldn't follow all the instructions, and it wouldn't pass the exact command I had written. I pointed out the mistake and it said "oh, you are right!", then proceeded to make the same mistake again.

I used this command with Sonnet 4.5 too and never had a problem until this week. Something changed, either in the harness or the model. This is not just vibes: workflows I have run hundreds of times have stopped working with Opus 4.5.
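For readers unfamiliar with this kind of workflow: Claude Code slash commands are markdown prompt files checked into the repo. A hypothetical sketch of one (the filename, repo layout, and steps are invented for illustration, not the commenter's actual command; `gh pr view --comments` and `gh pr diff` are real GitHub CLI invocations):

```markdown
<!-- .claude/commands/pr-comments.md — hypothetical example file -->
Fetch the review comments for the current PR and bring them into context.

Steps (follow these exactly; do not substitute other commands or arguments):
1. Run `gh pr view --comments` to print the PR description and its comments.
2. Run `gh pr diff` to load the current diff for context.
3. List each unresolved comment with the file and line it refers to.
```

The failure mode described above is the model ignoring the "follow these exactly" constraint in a file like this and improvising its own arguments.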

kachapopopow|1 month ago

They're A/B testing on the latest Opus model: sometimes it's good, sometimes it's worse than Sonnet, which is annoying as hell. I think they trigger it when you have excessive usage or high context use.

hirako2000|1 month ago

Or maybe when usage is low, so that we try again.

Or maybe when usage is high, they tweak a setting that uses the cache when it shouldn't.

For all we know, they run whatever experiments they want: to demonstrate theoretically better margins, or to analyse user patterns when a performance drop occurs.

Given what is done in other industries that don't face an existential issue, it wouldn't surprise me if, in a few years, some whistleblowers tell us what's been going on.

root_axis|1 month ago

This has been said about every LLM product from every provider since ChatGPT4. I'm sure nerfing happens, but I think the more likely explanation is that humans have a tendency to find patterns in random noise.

measurablefunc|1 month ago

They are constantly trying to reduce costs which means they're constantly trying to distill & quantize the models to reduce the energy cost per request. The models are constantly being "nerfed", the reduction in quality is a direct result of seeking profitability. If they can charge you $200 but use only half the energy then they pocket the difference as their profit. Otherwise they are paying more to run their workloads than you are paying them which means every request loses them money. Nerfing is inevitable, the only question is how much it reduces response quality & what their customers are willing to put up with.
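For context on what "quantize" means here: storing weights at lower precision cuts memory and energy per request, at a measurable cost in accuracy. A minimal sketch of symmetric per-tensor int8 quantization (illustrative only; real serving stacks use more sophisticated schemes):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)  # stand-in for a weight matrix

# Symmetric per-tensor int8 quantization: map the largest |weight| to 127.
scale = float(np.abs(w).max()) / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize for inference: 1 byte per weight instead of 4, in exchange
# for a bounded rounding error on every weight.
w_hat = q.astype(np.float32) * scale
max_err = float(np.abs(w - w_hat).max())
```

The per-weight error is bounded by half the scale step; whether that degrades a model's responses noticeably is exactly the kind of thing users cannot verify from the outside, which is the commenter's point.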

landl0rd|1 month ago

I've observed the same random foreign-language characters (Chinese or Japanese, I believe?) interspersed without rhyme or reason that I've come to expect from low-quality, low-parameter-count models, even while using "Opus 4.5".

An upcoming IPO increases pressure to make financials look prettier.

boringg|1 month ago

I've seen this too and ignored it. Weird.

epolanski|1 month ago

Not really.

In fact, as my prompts and documents get better, it seems to do increasingly better.

Still, it can't replace a human. I constantly need to correct it, and if I try to one-shot a feature I always end up spending more time refactoring it a few days later.

Still, it's a huge boost to productivity, but the day it can take over without detailed instructions and oversight is far away.