
qazxcvbnmlp | 1 month ago

My mental model is that AI coding tools are machines that take a set of constraints and turn them into a piece of code. The better you get at having the tool give itself those constraints accurately, the higher-level the task you can focus on.

Eg compiler errors, unit tests, mcp, etc.

I've heard of these, but haven't tried them yet.

https://github.com/hmans/beans

https://github.com/steveyegge/gastown

Right now I spend a lot of "back pressure" on fitting the scope of the task into something that will fit in one context window (i.e. the useful computation, not the raw token count). I suspect we will see a large breakthrough when someone finally figures out a good system for having the LLM do this itself.


AnonyX387 | 1 month ago

> Right now I spend a lot of "back pressure" on fitting the scope of the task into something that will fit in one context window (i.e. the useful computation, not the raw token count). I suspect we will see a large breakthrough when someone finally figures out a good system for having the LLM do this itself.

I've found https://github.com/obra/superpowers very helpful for breaking the work up into logical chunks a subagent can handle.

jkhdigital | 1 month ago

Doesn't that still basically rely on feeding context through natural-language instructions, which can be ignored or poorly followed?

The answer is not more natural-language guardrails; it is (progressive) formal specification of workflows and acceptance criteria. Marking a task complete should only be possible through an API that rejects the change unless it carries proof that the acceptance criteria were met.
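A minimal sketch of that gate idea, assuming machine-checkable criteria (all names here are hypothetical, not from any real tool): the only code path that can set a task to complete first evaluates every criterion and rejects the change if any fail.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    # Machine-checkable acceptance criteria, not natural-language guardrails.
    criteria: list[Callable[[], bool]]
    complete: bool = False

def mark_complete(task: Task) -> bool:
    """The only path to 'complete': every criterion must pass."""
    if all(check() for check in task.criteria):
        task.complete = True
        return True
    return False  # change rejected: proof of acceptance is missing
```

In practice each criterion would wrap something like a test run or a type check; the point is that completion is structurally unreachable without the proofs, rather than merely discouraged by prompt text.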

nonethewiser | 1 month ago

How would you compare it to Claude Code in planning mode?