rocqua | 13 days ago
That conversation is held in text, not in any internal representation. That text is called the reasoning trace. You can then analyse that trace.
bandrami | 13 days ago
ehsanu1 | 13 days ago
Reconstruction of reasoning from scratch can happen in some legacy APIs, such as the OpenAI chat completions API, which doesn't support passing reasoning blocks around. OpenAI specifically recommends using the newer Responses API, which improves both accuracy and latency by reusing existing reasoning.
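A minimal sketch of the difference being described, with a hypothetical in-memory "server" standing in for the API so it runs offline (`store` and `responses_create` are illustrative assumptions, not the real `openai` library): with chat completions, each request carries only the visible messages and prior reasoning is lost; with a Responses-style API, chaining via a previous response id lets the server reuse stored reasoning.

```python
# Hypothetical in-memory server standing in for the API (an assumption
# for illustration; not the real openai client).
store = {}  # response_id -> {"reasoning": ..., "output": ...}


def responses_create(input_text, previous_response_id=None):
    """Sketch of a Responses-style call that can reuse prior reasoning."""
    prior = store.get(previous_response_id)
    reused = prior["reasoning"] if prior else None
    # New reasoning builds on what was reused instead of starting from scratch,
    # which is the accuracy/latency win described above.
    reasoning = (reused or "") + f"[thought about {input_text!r}]"
    rid = f"resp_{len(store)}"
    store[rid] = {"reasoning": reasoning, "output": f"answer to {input_text!r}"}
    return rid


first = responses_create("step 1")
second = responses_create("step 2", previous_response_id=first)
# The second turn's reasoning includes the first turn's, reused server-side;
# a chat-completions-style client would have had to resend history with the
# reasoning already stripped.
print(store[second]["reasoning"])
```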
tibbar | 13 days ago
In this regard, the reasoning trace of an agent is trivially accessible to clients, unlike the reasoning trace of an individual LLM API call; it's a higher level of abstraction. Indeed, I implemented an agent just the other day that took advantage of this. The OP you originally replied to was discussing an agentic coding process, not an individual LLM API call.
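To illustrate the point about abstraction levels, here is a minimal sketch of an agent loop whose full trace is client-visible. `fake_model` and `run_tool` are hypothetical stand-ins (a real agent would call an LLM API and real tools); the key idea is that every model reply and tool result is appended to a plain list the caller can inspect, regardless of whether the underlying API exposes its internal reasoning.

```python
def fake_model(prompt):
    # Hypothetical stand-in for an LLM call: asks for a tool once,
    # then finishes after seeing the tool result.
    if "TOOL_RESULT" in prompt:
        return "FINAL: done"
    return "CALL_TOOL: list_files"


def run_tool(name):
    # Hypothetical stand-in for a tool invocation.
    return f"TOOL_RESULT({name}): ['main.py']"


def run_agent(task, max_steps=5):
    trace = []  # the agent-level reasoning trace, trivially accessible
    prompt = task
    for _ in range(max_steps):
        reply = fake_model(prompt)
        trace.append(reply)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL: "), trace
        tool_name = reply.removeprefix("CALL_TOOL: ")
        result = run_tool(tool_name)
        trace.append(result)
        prompt = task + "\n" + result
    return None, trace


answer, trace = run_agent("clean up the repo")
```

The client gets `trace` back directly, so it can analyse the agent's steps even when each individual LLM call keeps its own reasoning hidden.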