kromem | 4 months ago
That injection (for various reasons) will essentially eat up a massive amount of the model's attention budget, along with most of the extended thinking trace if one is present.
I haven't really seen lower-quality responses from modern Claudes at long context when it's just the models themselves, but in the web/app, with the long conversation reminder (LCR) injections, the conversation goes to shit very quickly.
And yeah, LCRs becoming part of memory is one (of several) things that's probably going to bite Anthropic in the ass with this implementation.