This is kind of frustrating to read. The style is very busy and it lacks a clear structure: it's basically an information dump with no acknowledgment of what is important and what isn't. Big-O notation is provided for a lot of operations where you wouldn't really care about Big-O (in a system where calls to an LLM dominate, that's most operations). The big-picture story of how Claude Code actually works, as in what happens when I type in a prompt (which I'm very interested in, given how much I use it), is missing. Some diagrams are so nonsensical they become funny. Look at this: https://southbridge-research.notion.site/Prompt-Engineering-... In general, the prompt engineering page, which deserves maybe the most detailed treatment, is just a dump of prompts and LLM bullet-point filler.
I don't want to be overly negative, but I think it's only fair given the author hasn't graced us with their own thoughts, instead offloading the actual writing to an LLM.
Claude Code with Sonnet 4 is so good I've stopped using Aider. This has been hugely productive. I've been able to write agents that Claude Code can spawn and call out to for other models, even.
Could you briefly explain your workflow? I use Zed’s agent mode and I don’t really understand how people are doing it purely through the CLI. How do you get a decent workflow where you can approve individual hunks? Aren’t you missing out on LSP help doing it in the CLI?
Have you been able to interface Claude Code with Gemini 2.5 Pro? I'm finding that Gemini 2.5 Pro is still better at solving certain problems and architecture and it would be great to be able to consult directly in CC.
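One way people wire this kind of thing up (a minimal sketch, not anything confirmed in this thread: the `gemini` CLI invocation, the model ids, and the routing keywords are all placeholders) is a small script that Claude Code is allowed to shell out to, which forwards a question to a second model and prints the answer back for Claude to read:

```python
#!/usr/bin/env python3
"""Hypothetical helper that Claude Code can shell out to in order to
consult a second model. The transport (a `gemini` CLI assumed to be
on PATH) is an assumption; swap in whatever client you actually use."""
import subprocess
import sys


def pick_model(question: str) -> str:
    """Crude routing: send architecture/design questions to the
    stronger model, everything else to a cheaper one.
    Both model ids are placeholders."""
    architect_words = ("architecture", "design", "tradeoff", "schema")
    q = question.lower()
    if any(w in q for w in architect_words):
        return "gemini-2.5-pro"    # placeholder id
    return "gemini-2.5-flash"      # placeholder id


def consult(question: str) -> str:
    model = pick_model(question)
    # Assumed CLI shape; replace with your real client call.
    out = subprocess.run(
        ["gemini", "--model", model, "--prompt", question],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


if __name__ == "__main__" and len(sys.argv) > 1:
    print(consult(" ".join(sys.argv[1:])))
```

You then tell Claude Code (e.g. in CLAUDE.md) that it may run this script when it wants a second opinion; Claude reads the printed answer as ordinary tool output.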
No, it's completely useless, and puts the entire rest of the analysis in a bad light.
LLMs have next to no understanding of their own internal processes. There's a significant amount of research that demonstrates this. All explanations of an internal thought process in an LLM are completely reverse engineered to fit the final answer (interestingly, humans are also prone to this – seen especially in split brain experiments).
In addition, the degree to which the author must have prompted the LLM to get it to anthropomorphize this hard makes the rest of the project suspect. How many of the results are repeated human prompting until the author liked the results, and how many come from actual LLM intelligence/analysis skill?
It's sure phrased like one, but I'd be careful about treating what an LLM says it's thinking as its actual thought process. LLMs are experts at working backwards to justify why they came to an answer, even when the justification is entirely fabricated.
Also, I will say that (if we can trust that the findings in these notes are reasonably accurate to the real implementation) this is a PERFECT example of the real level of complexity in cutting-edge LLM usage: it's not just some complex, fancy prompt you give to a model in a chat window; there is so much important stuff happening behind the scenes. Though I suppose the people who complain about LLMs hallucinating / screwing up haven't tried Claude Code or any agentic workflows - or, it could be their architecture / code is so poorly written and poorly organized that even the LLM itself struggles to modify it properly.
> or, it could be their architecture / code is so poorly written and poorly organized that even the LLM itself struggles to modify it properly
You wrote this like it's some rare occurrence, and not a description of the bulk of the production code that exists today, even at high-level tech companies.
It sees everything it needs in one pass, with no extra reasoning or instruction tokens around things like MCP, which abstract things away and add hops before the model can simply understand where things are.
There is something here about the native filesystem and tooling, and some kind of insight into what agentic software engineering will look like. I mostly feel like an orchestrator, or a validator, sitting in the terminal window next to Claude Code, where I run tests and related things.
I was never a great terminal developer; I can't even type right. But Claude Code by far provides the best software engineering interface in there, in terms of LLM/agent UX.
Interesting... the analysis finds that MCP supports WebSockets as a transport, while there's big drama going on right now because Anthropic said they will "never support that", folks hating SSE, and so on and so forth.
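For context on the transport question: Claude Code configures MCP servers through a `.mcp.json` file, and the documented transports are stdio (spawn a local process) and remote ones like SSE. A rough sketch of the shape (server names, command, and URL here are made up; treat the exact schema as approximate and check the current docs):

```json
{
  "mcpServers": {
    "local-tools": {
      "command": "node",
      "args": ["./mcp-server.js"]
    },
    "remote-tools": {
      "type": "sse",
      "url": "https://example.com/mcp"
    }
  }
}
```

So if the analysis found WebSocket handling in the bundle, that would be capability in the code rather than a configuration surface Anthropic has committed to supporting.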
demarq|9 months ago
It sounds a lot like the Murderbot character in the Apple TV show!
MoonGhost|9 months ago
Firefox 113.0.2, how come?
29athrowaway|9 months ago
It is good because it highlights the relevant aspects of the design and you can use this, plus some other resources, to replicate the idea.