bcherny | 18 days ago
One of the hard things about building a product on an LLM is that the model frequently changes underneath you. Since we introduced Claude Code almost a year ago, Claude has gotten more intelligent, it runs for longer periods of time, and it is able to use more tools, more agentically. This is one of the magical things about building on models, and also one of the things that makes it very hard. There's always a feeling that the model is outpacing what any given product is able to offer (ie. product overhang). We try very hard to keep up, and to deliver a UX that lets people experience the model in a way that is raw and low level, and maximally useful at the same time.
In particular, as agent trajectories get longer, the average conversation has more and more tool calls. When we released Claude Code, Sonnet 3.5 was able to run unattended for less than 30 seconds at a time before going off the rails; now, Opus 4.6 1-shots much of my code, often running for minutes, hours, and days at a time.
The amount of output this generates can quickly become overwhelming in a terminal, and is something we hear often from users. Terminals give us relatively few pixels to play with; they have a single font size; colors are not uniformly supported; in some terminal emulators, rendering is extremely slow. We want to make sure every user has a good experience, no matter what terminal they are using. This is important to us, because we want Claude Code to work everywhere, on any terminal, any OS, any environment.
Users give the model a prompt, and don't want to drown in a sea of log output in order to pick out what matters: specific tool calls, file edits, and so on, depending on the use case. From a design POV, this is a balance: we want to show you the most relevant information, while giving you a way to see more details when useful (ie. progressive disclosure). Over time, as the model continues to get more capable -- so trajectories become more correct on average -- and as conversations become even longer, we need to manage the amount of information we present in the default view to keep it from feeling overwhelming.
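The collapsing behavior this describes can be sketched in a few lines. To be clear, the event shapes and function names below are hypothetical illustrations of progressive disclosure, not Claude Code's internals:

```python
from itertools import groupby

def render(events, expanded=False):
    """Collapse consecutive same-kind tool calls into one summary line,
    keeping the per-item details available on demand (progressive disclosure)."""
    # events: list of (kind, detail) pairs, e.g. ("Read", "src/main.py")
    lines = []
    for kind, group in groupby(events, key=lambda e: e[0]):
        group = list(group)
        if expanded or len(group) == 1:
            lines += [f"{kind} {detail}" for _, detail in group]
        else:
            lines.append(f"{kind} {len(group)} items")  # collapsed summary
    return lines

calls = [("Read", "a.py"), ("Read", "b.py"), ("Read", "c.py"), ("Edit", "a.py")]
# render(calls)                -> ["Read 3 items", "Edit a.py"]
# render(calls, expanded=True) -> ["Read a.py", "Read b.py", "Read c.py", "Edit a.py"]
```

The default view shows the summary; the expanded view is the "more details when useful" path.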
When we started Claude Code, it was just a few of us using it. Now, a large number of engineers rely on Claude Code to get their work done every day. We can no longer design for ourselves, and we rely heavily on community feedback to co-design the right experience. We cannot build the right things without that feedback. Yoshi rightly called out that often this iteration happens in the open. In this case in particular, we approached it intentionally, and dogfooded it internally for over a month to get the UX just right before releasing it; this resulted in an experience that most users preferred.
But we missed the mark for a subset of our users. To improve it, I went back and forth in the issue to understand what issues people were hitting with the new design, and shipped multiple rounds of changes to arrive at a good UX. We've built in the open in this way before, eg. when we iterated on the spinner UX, the todos tool UX, and for many other areas. We always want to hear from users so that we can make the product better.
The specific remaining issue Yoshi called out is reasonable. PR incoming in the next release to improve subagent output (I should have responded to the issue earlier, that's my miss).
Yoshi and others -- please keep the feedback coming. We want to hear it, and we genuinely want to improve the product in a way that gives great defaults for the majority of users, while being extremely hackable and customizable for everyone else.
steinnes|18 days ago
bcherny|18 days ago
To try it: /config > verbose, or --verbose.
Please keep the feedback coming. If there is anything else we can do to adjust verbose mode to do what you want, I'd love to hear.
verelo|18 days ago
4gotunameagain|18 days ago
shhh don't say that, they will never fix it if it means you use fewer tokens.
espeed|18 days ago
stingraycharles|18 days ago
I understand that I’m probably not the target audience if I want to actually step in and correct course, but it’s annoying.
ctoth|18 days ago
Sighted users lost convenience. I lost the ability to trust the tool. There is no "glancing" at terminal output with a screen reader. There is no "progressive disclosure." The text is either spoken to me or it doesn't exist.
When you collapse file paths into "Read 3 files," I have no way to know what the agent is doing with my codebase without switching to verbose mode, which then dumps subagent transcripts, thinking traces, and full file contents into my audio stream. A sighted user can visually skip past that. I listen to every line sequentially.
You've created a situation where my options are "no information" or "all information." The middle ground that existed before, inline file paths and search patterns, was the accessible one.
This is not a power user preference. This is a basic accessibility regression. The fix is what everyone in this thread has been asking for: a BASIC BLOODY config flag to show file paths and search patterns inline. Not verbose mode surgery. A boolean.
Please just add the option.
And yes, I rewrote this with Claude to tone my anger and frustration down about 15 clicks from how I actually feel.
duncangh|17 days ago
wahnfrieden|18 days ago
bcherny|18 days ago
deaux|18 days ago
> Yoshi and others -- please keep the feedback coming. We want to hear it, and we genuinely want to improve the product in a way that gives great defaults for the majority of users, while being extremely hackable and customizable for everyone else.
I think an issue with 2550 upvotes, more than 4 times that of the second-highest, is very clear feedback about your defaults and/or making it customizable.
lgessler|18 days ago
bakugo|18 days ago
Are you actually wondering, or just hoping to hear a confirmation of what you already know? Because the reason behind it is pretty clear, it doubles as both vendor lock-in and advertisement.
patcon|18 days ago
jarjoura|18 days ago
> The amount of output this generates can quickly become overwhelming in a terminal
If I use Opus 4.6, arguably the most verbose, overthinking model you've released to date, OpenCode handles it just the same as it does Sonnet 4.0.
OpenCode even allows me to toggle into subagent and task agents with their own output terminals that, if I am curious what is going on, I can very clearly see it.
All Claude Code has done is turn the output into a black box, so that I am forced to wait for it to finish to look at the final git diff. By then it's spent $5-10 working on a task, and thrown away a lot of the context it took to get there. It showed "thinking" blocks that weren't particularly actionable, because it was mostly talking to itself about how it can't do something because it goes against a rule, but it really wants to.
I'm actually frustrated with Code blazing through to the end without letting me see the transcript of the changes.
eoncode|18 days ago
Funnily enough, both independently sided with the users, not the authors.
The core problem: --verbose was repurposed instead of adding a new toggle. Users who relied on verbose for debugging (thinking, hooks, subagent output) now have broken workflows - to fix a UX decision that shouldn't have shipped as default in the first place.
What should have been done:
A simple separate toggle would've solved everything without breaking anyone's workflow.
Opus 4.6's parting thought: if you're building a developer tool powered by an AI that can reason about software design, maybe run your UX changes past it before shipping.
To be fair, your response explains the design philosophy well - longer trajectories, progressive disclosure, terminal constraints. All valid. But it still doesn't address the core point: why repurpose --verbose instead of adding a separate toggle? You can agree with the goal and still say the execution broke existing workflows.
sdoering|18 days ago
But this one isn't? I'd call myself a professional. I use it with tons of files across a wide range of projects and types of work.
To me file paths were an important aspect of understanding context of the work and of the context CC was gaining.
Now? It feels like running on a foggy street, never sure when the corner will come and I'll hit a fence or house.
Why not introduce a toggle? I'd happily add that to my aliases.
Edit: I forgot. I don't need better subagent output. Or even less output when watching thinking traces. I am happy to have full verbosity. There are cases where it's an important aspect.
bcherny|18 days ago
More details here: https://news.ycombinator.com/item?id=46982177
Grimblewald|17 days ago
tartoran|17 days ago
Did you ever think that this may be Anthropic's goal? It is a waste for sure but it increases their revenue. Later on the old feature you were used to may resurface at a different tier so you'd have to pay up to get it.
jatora|17 days ago
SPICLK2|18 days ago
https://martin.ankerl.com/2007/09/01/comprehensive-linux-ter...
Could the React rendering stack be optimised instead?
sigmarule|18 days ago
Aeolun|18 days ago
I just find that very hard to believe. Does anyone actually do anything with the output now? Or are they just crossing their fingers and hoping for the best?
bcherny|18 days ago
gwern|18 days ago
If you are serious about this, I think there are so many ways you could clean up, simplify, and calm the Claude Code terminal experience already.
I am not a CC user, but an enthusiastic CC user generously spent an hour or two last week or so showing me how it worked and walking through a non-publicly-implemented Gwern.net frontend feature (some CSS/JS styling of poetry for mobile devices).
It was highly educational and interesting, and Claude got most of the way to something usable.
Yet I was shocked and appalled by the CC UI/UX itself: it felt like the fetal alcohol syndrome lovechild of a Las Vegas slot machine and TikTok. I did not realize that all those jokes about how using CC was like 'crack' or 'ADHD' or 'gambling' were so on point, I thought they were more, well, metaphorical about the process as a whole. I have not used such a gross and distracting UI in... a long time. Everything was dancing and bouncing around and distracting me while telling me nothing. I wasted time staring at the update monitor trying to understand if "Prognosticating..." was different from "Fleeblegurbigating..." from "Reticulating splines...", while the asterisk bounces up and down, or the colored text fades in and out, all simultaneously, and most of the screen was wasted, and the whole thing took pains to put in as much fancy TUI nonsense as it could. An absolute waste, not whimsy, of pixels. (And I was a little concerned how much time we spent zoned out waiting on the whole shebang. I could feel the productivity leaving my body, minute by minute. How could I possibly focus on anything else while my little friendly bouncing asterisk might finish at any instant...?!) Some description of what files are being accessed seems like you could spare the pixels for them.
So I was impressed enough with the functionality to move it up my list, but also much of it made me think I should look into GPT Codex instead. It sounds like the interfaces there respect my time and attention more, rather than treating me like a Zoomer.
gwern|17 days ago
patcon|18 days ago
Please revert this
walt_grata|18 days ago
kache_|18 days ago
ivanb|18 days ago
subscribed|14 days ago
gigatexal|18 days ago
lysace|18 days ago
That's why I use your excellent VS Code extension. I have lots of screen space and it's trivial to scroll back there, if needed.
I would really like even more love given to this. When working with long-lived code bases it's important to understand what is happening. Lots of promising UX opportunities here. I see hints of this, but it seems like 80% is TBD.
Ideally you would open source the extension to really use the creativity of your developer user base. ;)
_heimdall|17 days ago
It might be worth considering a "verbose level" type setting with a selection of levels that describe the level of verbosity. Effectively, use a select menu instead of a boolean when one boolean state is actually multiple nested states.
Edit: I realised my use of "verbose" and "verbosity" here is itself ironically verbose, sorry!
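As a sketch of that suggestion (all names here are made up, not Claude Code's actual settings), a graded level works where a boolean can't, because each level strictly adds detail over the previous one:

```python
from enum import IntEnum

class Verbosity(IntEnum):
    QUIET = 0    # final results only
    SUMMARY = 1  # collapsed tool calls ("Read 3 files")
    PATHS = 2    # inline file paths and search patterns
    FULL = 3     # thinking traces, hooks, subagent transcripts

def should_show(event_level: Verbosity, setting: Verbosity) -> bool:
    # Render an event when the user's setting is at least as verbose
    # as the level the event was tagged with.
    return setting >= event_level

# With the setting at PATHS, file-path lines show but full traces don't.
assert should_show(Verbosity.PATHS, Verbosity.PATHS)
assert not should_show(Verbosity.FULL, Verbosity.PATHS)
```

The middle levels are exactly the "nested states" a single boolean collapses away.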
jpcompartir|18 days ago
As others have said - 'reading 10 files' is useless information - we want to be able to see at a glance where it is and what it's doing, so that we can re-direct if necessary.
With the release of Cowork, couldn't Claude Code double down on needs of engineers?
giancarlostoro|17 days ago
Wowfunhappy|16 days ago
bird0861|16 days ago
keremk|17 days ago
jiggawatts|18 days ago
Ooo... ooo! I know what this is a reference to!
https://www.youtube.com/watch?v=hxM8QmyZXtg
jinus7949|18 days ago
https://github.com/anthropics/claude-code/issues/19673
rcbdev|18 days ago
rednafi|17 days ago
gchamonlive|18 days ago
If that's the case, it's important to assess whether it'll be consistent when operating on a higher level, less dependent on the software layer that governs the agent. Otherwise it'll risk Claude also becoming more erratic.
cyanydeez|18 days ago
nullbio|13 days ago
pvalue005|18 days ago
sh34r|18 days ago
Of course all the logs can’t be streamed to a terminal. Why would they need to be? Every logging system out there allows multiple stream handlers with different configurations.
Do whatever reasonable defaults you think make sense for the TUI (with some basic configuration). But then I should also be able to give Claude Code a file descriptor and a different set of config options, and you can stream all the logs there. Then I can vibe-code whatever view filter I want on top of that, or heck, have an SLM sub-agent filter it all for me.
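The multiple-handler pattern being described is standard in, for example, Python's stdlib logging; a minimal sketch (the logger name and log file name are made up for illustration):

```python
import logging
import sys

log = logging.getLogger("agent")
log.setLevel(logging.DEBUG)

# Terse handler for the terminal: warnings and up, short format.
tui = logging.StreamHandler(sys.stderr)
tui.setLevel(logging.WARNING)
tui.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))

# Firehose handler to a file: everything, with timestamps,
# for any downstream filter or viewer to consume.
fire = logging.FileHandler("agent-full.log")
fire.setLevel(logging.DEBUG)
fire.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log.addHandler(tui)
log.addHandler(fire)

log.debug("Read src/main.py")    # goes to the file only
log.warning("Tool call failed")  # goes to both terminal and file
```

Same event stream, two views: the TUI stays quiet while the file descriptor gets the lot.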
I could do this myself with some proxy / packet capture nonsense, but then you’d just move fast and break my things again.
I’m also constantly frustrated by the fancier models making wrong assumptions in brownfield projects and creating a big mess instead of asking me follow-up questions. Opus is like the world’s shittiest intern… I think a lot of that is upstream of you, but certainly not all of it. There could be a config option to vary the system prompt to encourage more elicitation.
I love the product you’ve built, so all due respect there, but I also know the stench of enshittification when I smell it. You’re programmers, you know how logging is supposed to work. You know MCP has provided a lot of these basic primitives and they’re deliberately absent from claude code. We’ve all seen a product get ratfucked internally by a product manager who copied the playbook of how Prabhakar Raghavan ruined google search.
The open source community is behind at the moment, but they’ll catch up fast. Open always beats closed in the long run. Just look at OpenAI’s fall into disgrace.
troupo|17 days ago
This is verifiable bullshit. Unless you explicitly explain how it "runs for days" since Opus's context window is incapable of handling even relatively large CLAUDE.md files.
> The amount of output this generates can quickly become overwhelming in a terminal, and is something we hear often from users. Terminals give us relatively few pixels to play with; they have a single font size; colors are not uniformly supported; in some terminal emulators, rendering is extremely slow.
No. It's your incapability as an engineer that limits this. And you and your engineers getting high on your own supply. Hence you need 16ms to draw a couple of characters on screen and call it a tiny game engine [1], for which your team was rightfully ridiculed.
> But we missed the mark for a subset of our users. To improve it,
AI-written corporate nothingspeak.
[1] https://x.com/trq212/status/2014051501786931427
indemnity|18 days ago
mnky9800n|18 days ago
adastra22|18 days ago
2001zhaozhao|18 days ago
Maybe "AI IDEs" will gain ground in the future, e.g. vibe-kanban
troupo|17 days ago
Unfortunately, vibe coders cannot do that anymore.
bibabaloo|18 days ago
It doesn't compose with any other command line program and the terminal interface is limiting.
I'm surprised nobody has yet made a coding assistant that runs in the browser or as a standalone app. At this point it doesn't really need to integrate with my text editor or IDE.
metaltyphoon|18 days ago
ftchd|18 days ago
benn67|18 days ago
I subscribe to max rn. Tons of money. Anthropic’s Super Bowl ads were shit, not letting us use open code was shit, and this is more shit. Might only be a single straw left before I go to codex (no one’s complaining about it. And the openclaw creator prefers it)
This dev is clearly writing his reply with Claude and sounding way too corpo. This feels like how school teachers would talk to you. Your response in its length was genuinely insulting. Everyone knows how to generate text with AI now and you’re doing a terrible job at it. You can even see the emdash attempt (markdown renders two normal dashes as an emdash).
This was his prompt: “read this blog post, familiarize yourself with the mentioned GitHub issue and make a response on behalf of Anthropic.” He then added a little bit at the end when he realized the response didn’t answer the question, and got it to fix the grammar and spelling on that.
Your response is appropriate for the masses. But we’re not. We’re the so called hackers and read right through the bs. It’s not even about the feature being gone anymore.
There is a principle we uphold as “hackers” that doesn’t align with this, and it pisses people off a lot more than you think. I can’t really put my finger on it; maybe someone can help me out.
PS About the Super Bowl ads. Anyone that knows the story knows they’re exaggerated. (In the general public outside of Silicon Valley it’s like a 50/50 split or something about people liking or disliking AI as a whole rn. OpenAI is doing way more to help the case (not saying ads are a good thing). ) Open ai used to feel like the bad guy now it’s kinda shifting to anthropic. This, the ads and open code are all examples of it. (I especially recommend people watch the anthropic and open ai Super Bowl ads back to back)
retsibsi|18 days ago
> You can even see the emdash attempt (markdown renders two normal dashes as an emdash)
He says he wrote it all manually.[0] Obviously I can't know if that's true, but I do think your internal AI detector is at least overconfident. For example, some of us have been regularly using the double hyphen since long before the LLM era. (In Word, it auto-corrects to an en dash, or to an em dash if it's not surrounded by spaces. In plain text, it's the best looking easily-typable alternative to a dash. AFAICT, it's not actually used for dashes in CommonMark Markdown.)
The rest is more subjective, but there are some things Claude would be unlikely to write (like the parenthetical "(ie. progressive disclosure)" -- it would write "i.e." with both dots, and it would probably follow it with a comma). Of course those could all be intentional obfuscations or minimal human edits, but IMO you are conflating the corporate communications vibe with the LLM vibe.
[0] https://news.ycombinator.com/item?id=46982418
andai|17 days ago
And stop banning 3rd party harnesses please. Thanks
Anthropic, your actual moat is goodwill. Remember that.
xigoi|17 days ago
You mean the company that DDoSed websites to train their model?
cactusplant7374|18 days ago
Edit: I can't post anymore today apparently because of dang. If you post a comment about a bad terminal at least tell us about the rendering issues.
bcherny|18 days ago
latchkey|18 days ago
kaizenb|18 days ago
Paodim|18 days ago
Paodim|18 days ago
1718627440|18 days ago
As someone who finds formal language a natural and better interface for controlling a computer, can you explain how and why you actually hate it? I don't mean stuff like lack of discoverability because you use a shell that lacks completion and documentation, which have been common for decades; I get those downsides. But why do you detest it in principle?
Jean-Papoulos|18 days ago
notanastronaut|17 days ago
exabrial|18 days ago
Can we please move the "Extended Thinking" icon back to the left side of claude desktop, near the research and web search icons? What used to be one click is now three.
eldarrr|18 days ago
tom_m|17 days ago
holoduke|18 days ago
NateEag|17 days ago
I never thought I'd long for the days when people posted "$LLM says" comments, but at least those were honest.
Wowfunhappy|17 days ago
Focusing on programmers seems to have really worked for Anthropic. (And they do also have Claude Cowork).
nubg|18 days ago
use your own words!
i would rather read the prompt.
benn67|18 days ago
lombasihir|18 days ago
gambiter|18 days ago
How can that be true, when you're deliberately and repeatedly telling devs (the community you claim to listen to) that you know better than they do? They're telling you exactly what they want, and you're telling them, "Nah." That isn't listening. You understand that, right?
adriand|18 days ago
And it shouldn’t need to be said, but the words that appear on the screen are from an actual person with, you know, feelings.
puppymaster|18 days ago
fasbiner|18 days ago
Arrogant and clueless, not exactly who I want to give my money to when I know what enshittification is.
They have horrible instincts and are completely clueless. You need to move them away from a public-facing role. It honestly looks so bad, it looks so bad that it suggests nepotism and internal dysfunction to have such a poor response.
This is not the kind of mistake someone makes innocently, it's a window into a worldview that's made me switch to gemini and reactivate cursor as a backup because it's only going to get worse from here.
The problem is not the initial change (which you would rapidly realize was a big deal to a huge number of your users) but how high-handed and incompetent the initial response was. Nobody's saying they should be fired, but they've failed in public in a huge way and should step back for a long time.
latchkey|18 days ago
ares623|18 days ago
rmujica|18 days ago