Show HN: Agent framework that generates its own topology and evolves at runtime
107 points | vincentjiang | 18 days ago | github.com
I’m Vincent from Aden. We spent four years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep. They want services, not tools.
Existing agent frameworks (LangChain, AutoGPT) failed in production - brittle, looping, and unable to handle messy data. General Computer Use (GCU) frameworks were even worse. My reflections:
1. The "Toy App" Ceiling & GCU Trap Most frameworks assume synchronous sessions. If the tab closes, state is lost. You can't fit 2 weeks of asynchronous business state into an ephemeral chat session.
The GCU hype (agents "looking" at screens) is skeuomorphic. It’s slow (screenshots), expensive (tokens), and fragile (UI changes = crash). It mimics human constraints rather than leveraging machine speed. Real automation should be headless.
2. Inversion of Control: OODA > DAGs
Traditional DAGs are deterministic; if a step fails, the program crashes. In the AI era, the Goal is the law, not the Code. We use an OODA loop to manage stochastic behavior:
- Observe: Exceptions are observations (FileNotFound = new state), not crashes.
- Orient: Adjust strategy based on Memory and Traits.
- Decide: Generate new code at runtime.
- Act: Execute.
The topology shouldn't be hardcoded; it should emerge from the task's entropy.
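The loop above can be sketched in a few lines of Python. This is a toy illustration of the Observe/Orient/Decide/Act framing, not the actual Hive API: `decide` here uses a canned strategy table where the real system would generate new code with a model, and `read_ledger` is an invented demo action.

```python
def read_ledger(state):
    # Demo action: always fails on first contact, like a missing file.
    raise FileNotFoundError("ledger.csv not found")

def observe(action, state):
    """Run an action; an exception becomes an observation, not a crash."""
    try:
        return {"ok": True, "result": action(state)}
    except Exception as exc:
        return {"ok": False, "error": type(exc).__name__, "detail": str(exc)}

def orient(observation, memory):
    """Fold the observation into memory so the next decision can use it."""
    memory.append(observation)
    return memory

def decide(memory):
    """Stand-in for runtime code generation: pick a strategy from what we saw."""
    last = memory[-1]
    if last["ok"]:
        return None                       # goal reached, stop
    if last["error"] == "FileNotFoundError":
        return lambda state: "recovered"  # canned recovery strategy
    return lambda state: "fallback"

def run(goal_action, state, max_steps=5):
    memory = []
    action = goal_action
    for _ in range(max_steps):
        obs = observe(action, state)      # Observe (Act + capture outcome)
        orient(obs, memory)               # Orient
        action = decide(memory)           # Decide: new strategy, or stop
        if action is None:
            return obs, memory
    return None, memory
```

The point of the sketch is that the topology is whatever `decide` emits at each step, not a graph fixed in advance.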
3. Reliability: The "Synthetic" SLA
You can't guarantee that one inference ($k=1$) is correct, but you can guarantee that a System of Inference ($k=n$) converges on correctness. Reliability becomes a function of compute budget. By wrapping an 80%-accurate model in a "Best-of-3" verification loop, we mathematically force the error rate down, trading latency and tokens for certainty.
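The arithmetic behind this is easy to make concrete. Assuming independent runs and a perfect verifier (both strong assumptions), an 80%-accurate model in a best-of-3 loop fails only when all three runs fail, roughly a 0.8% error rate, versus roughly a 10.4% error rate for unverified majority voting:

```python
from math import comb

def best_of_n_accuracy(p, n):
    """Best-of-n with a perfect verifier: fail only if all n runs fail."""
    return 1 - (1 - p) ** n

def majority_vote_accuracy(p, n):
    """Majority vote, no verifier: more than half the runs must be right."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

For p = 0.8 and n = 3 these give about 0.992 and 0.896 respectively, which is the "reliability as a function of compute budget" claim in numbers.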
4. Biology & Psychology in Code
"Hard Logic" can't solve "Soft Problems." We map cognition to architectural primitives:
- Homeostasis: Solving "Perseveration" (infinite loops) via a "Stress" metric. If an action fails 3x, "neuroplasticity" drops, forcing a strategy shift.
- Traits: Personality as a constraint. "High Conscientiousness" increases verification; "High Risk" executes DROP TABLE without asking.
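A minimal sketch of how the stress mechanic could work, with hypothetical names (the real implementation may differ): failures on the same approach accumulate until a threshold forces a strategy shift, and a success resets the counter.

```python
class Homeostasis:
    """Toy 'stress' tracker: repeated failure on one approach forces a shift."""

    def __init__(self, fail_limit=3):
        self.fail_limit = fail_limit
        self.stress = {}                  # consecutive failures per task key

    def record_failure(self, key):
        self.stress[key] = self.stress.get(key, 0) + 1

    def record_success(self, key):
        self.stress[key] = 0              # recovery resets stress

    def must_shift_strategy(self, key):
        """True once the same approach has failed fail_limit times in a row."""
        return self.stress.get(key, 0) >= self.fail_limit
```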
We need engineers interested in the intersection of biology, psychology, and distributed systems to help move the industry beyond brittle scripts. It'd be great to have you roast my code and share feedback.
CuriouslyC|18 days ago
Best of 3 (or more) tournaments are a good strategy. You can also use them for RL via GRPO if you're running an open weight model.
vincentjiang|18 days ago
The hardest mental shift for us was treating Exceptions as Observations. In a standard Python script, a FileNotFoundError is a crash. In Hive, we catch that stack trace, serialize it, and feed it back into the Context Window as a new prompt: "I tried to read the file and failed with this error. Why? And what is the alternative?"
The agent then enters a Reflection Step (e.g., "I might be in the wrong directory, let me run ls first"), generates new code, and retries.
We found this loop alone solved about 70% of the "brittleness" issues we faced in our ERP production environment. The trade-off, of course, is latency and token cost.
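The reflection loop described above can be sketched roughly as follows; `llm` is a hypothetical callable standing in for the model call (not the actual Hive API), and the prompt text mirrors the one quoted above.

```python
import traceback

def reflect_and_retry(task, llm, max_attempts=3):
    """Run task(); on failure, serialize the traceback and feed it back to
    the model as an observation, then retry with the model's hint in context."""
    context = []
    for _ in range(max_attempts):
        try:
            return task(context)
        except Exception:
            # The exception is an observation, not a crash: capture and reprompt.
            context.append(
                "I tried and failed with this error:\n"
                f"{traceback.format_exc()}\n"
                "Why? And what is the alternative?"
            )
            context.append(llm(context))  # reflection hint for the next attempt
    raise RuntimeError(f"gave up after {max_attempts} attempts")
```

A stub `llm` that returns "I might be in the wrong directory, run ls first" is enough to exercise the loop end to end.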
I'm curious how others are handling non-deterministic failures in long-running agent pipelines? Are you using simple retries, voting ensembles, or human-in-the-loop?
It'd be great to hear your thoughts.
Gagan_Dev|18 days ago
The OODA framing is compelling, especially treating exceptions as observations rather than terminal states. That said, I’m curious how you’re handling:
1. State persistence across long-running tasks: is memory append-only, event-sourced, or periodically compacted?
2. Convergence guarantees in your "system of inference" model: how do you prevent correlated failure across k runs?
3. Cost ceilings: at what point does reliability-through-redundancy become economically infeasible compared to hybrid symbolic validation?
I also like the rejection of GCU-style UI automation. Headless, API-first execution seems structurally superior for reliability and latency.
The biology-inspired control mechanisms (stress / neuroplasticity analogs) are intriguing — especially if they’re implemented as adaptive search constraints rather than metaphorical wrappers. Would be interested to understand how measurable those dynamics are versus heuristic.
Overall, pushing agents toward durable, autonomous services instead of chat wrappers is the right direction. Curious to see how Hive handles multi-agent coordination and resource contention at scale.
JBheemeswar|18 days ago
Treating exceptions as observations instead of terminal failures is a strong architectural reframing. It turns brittleness into a feedback signal rather than a crash condition.
A few production questions come to mind:
1) In the k-of-n inference model, how do you prevent correlated failure? If runs share similar prompts and priors, independence may be weaker than expected.
2) How is memory managed over long-lived tasks? Is it append-only, periodically compacted, or pruned strategically? State entropy can grow quickly in ERP contexts.
3) How do you bound reflection loops to prevent runaway cost? Are there hard ceilings or confidence-based stopping criteria?
I strongly agree with the rejection of UI-bound GCU approaches. Headless, API-first automation feels structurally more reliable.
The real test, in my view, is whether stochastic autonomy can be wrapped in deterministic guardrails — especially under strict cost and latency constraints.
Curious to see how Hive evolves as these trade-offs become more formalized.
omhome16|18 days ago
The concept of mapping 'exceptions as observations' rather than failures is the right mental shift for production.
Question on the 'Homeostasis' metric: Does the agent persist this 'stress' state across sessions? i.e., if an agent fails a specific invoice type 5 times on Monday, does it start Tuesday with a higher verification threshold (or 'High Conscientiousness') for that specific task type? Or is it reset per run?
Starred the repo, excited to dig into the OODA implementation.
Multicomp|18 days ago
My use case is less about hooking this up as some sort of business-workflow ClawdBot alternative, and more about seeing whether it can be an eventually consistent engine that lets me update state across various documents over the time dimension.
Could I use it to simulate some tabletop characters and their locations over time?
That would perhaps let me skip some bookkeeping: working out where a given NPC would be on a given day after so many days pass between game sessions. That lets me advance the game world without stepping it manually per character.
foota|18 days ago
Then have an agent collate the feedback, combined with telemetry from the server, and iterate on the code to fix it up.
In theory you could have the backend write itself and design new features based on what agents try to do with it.
I sort of got the idea from a comparison with JITs, you could have stubbed out methods in the server that would do nothing until the "JIT" agent writes the code.
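A toy version of that stub-until-called idea, with invented names (and `exec`, which any real version would have to sandbox): endpoints start empty, and the first call triggers a code-generating agent that fills them in, much like a JIT compiling a method on first invocation.

```python
class JITBackend:
    """Toy 'JIT backend': endpoints are generated by an agent on first call."""

    def __init__(self, codegen_agent):
        self.codegen = codegen_agent      # agent: endpoint name -> Python source
        self.impls = {}                   # compiled endpoints, cached

    def call(self, name, *args):
        if name not in self.impls:
            source = self.codegen(name)   # "JIT" the missing endpoint
            namespace = {}
            exec(source, namespace)       # sandbox this in anything real!
            self.impls[name] = namespace[name]
        return self.impls[name](*args)
```

With a canned agent that emits `def add(a, b): return a + b` for the name "add", the first call compiles the endpoint and later calls hit the cache.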
vincentjiang|18 days ago
A few things that come to my mind if I were to build this:
The 'Agent-User' Paradox: To make this work, you'd need the initial agents (the ones responding and testing the goals) to be 'chaotic' enough to explore edge cases, but 'structured' enough to provide meaningful feedback to the 'Architect' agent.
The Schema Contract: How would you ensure that as the backend "writes itself," it doesn't break the contract with the frontend? You’d almost need a JIT Documentation layer that updates in lockstep.
Verification: I wonder if the server should run the 'JIT-ed' code in a sandbox first, using the telemetry to verify the goal was met before promoting the code to the main branch.
It’s a massive shift from Code as an Asset to Code as a Runtime Behavior. Have you thought about how you'd handle state/database migrations in a world where the backend is rewriting itself on the fly? It feels to me that you're almost building a Lovable for backend services. I've seen a few open-source projects like this (e.g. MotiaDev), but none has executed it perfectly yet.
barelysapient|18 days ago
My next thought was to implement a multi-agent workforce on top of this where it’s a fully virtuous cycle, and iterative.
https://github.com/swetjen/virtuous
If you’re interested in working on this together my personal website and contact info is in my bio.
kaicianflone|18 days ago
You define a policy (majority, weighted vote, quorum), set the confidence level you want, and run enough independent inferences to reach it. Cost is visible because reliability just becomes a function of compute.
The question shifts from “is this output correct?” to “how much certainty do we need, and what are we willing to pay for it?”
Still early, but the goal is to make accuracy and cost explicit and tunable.
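A minimal sketch of that policy loop, with a simple quorum policy and a hypothetical `infer` callable standing in for an independent model run: keep sampling until enough runs agree, and report how much compute it took.

```python
from collections import Counter

def run_to_quorum(infer, quorum=2, max_runs=5):
    """Sample independent inferences until `quorum` of them agree.
    Returns (answer, runs_used), or (None, max_runs) if no consensus."""
    votes = Counter()
    for n in range(1, max_runs + 1):
        answer = infer()
        votes[answer] += 1
        if votes[answer] >= quorum:
            return answer, n              # answer plus the compute it cost
    return None, max_runs                 # no consensus within budget
```

Weighted voting or majority policies would slot in the same way; the key property is that cost (`runs_used`) is visible next to the answer.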
mapace22|18 days ago
To be fair, achieving 100% accuracy is something even humans don't do. I don't think this is about a system just asking an AI if something is right or wrong. The "judge" isn't another AI flipping a coin; it's a code validator based on mathematical checks or pre-established rules.
For example, if the agent makes a money transfer, the judge enters the database and validates that the number is exact. This is where we are merging AI intelligence with the security of traditional, "old school" code. Getting this close to 100% accuracy is already a huge deal. It’s like having three people reviewing an invoice instead of just one, it makes it much harder for an error to occur.
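A sketch of that kind of deterministic judge, using an illustrative SQLite schema (invented for this example, not any actual system): the transfer runs, then plain code re-reads the database and demands an exact figure before the result is trusted.

```python
import sqlite3

def transfer_judged(conn, account, amount, expected_balance):
    """Apply a transfer, then have a deterministic 'judge' re-read the DB
    and verify the exact number before the result is trusted."""
    conn.execute(
        "UPDATE accounts SET balance = balance + ? WHERE name = ?",
        (amount, account),
    )
    conn.commit()
    # The judge is old-school code, not another model: exact comparison.
    (balance,) = conn.execute(
        "SELECT balance FROM accounts WHERE name = ?", (account,)
    ).fetchone()
    if balance != expected_balance:
        raise ValueError(f"judge rejected: balance {balance} != {expected_balance}")
    return balance
```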
Regarding the cost, sure, the AI might cost a bit more because of all these extra validations. But if spending one dollar in tokens saves a company from losing five hundred dollars to an accounting error, the system has already paid for itself. It's an investment, not a cost. Plus, this tighter level of control helps prevent not just errors but also internal fraud and external irregularities. It's a layer of oversight that pays off.
Best regards
OpenClawBot|17 days ago
One technical question: How does the framework handle goal conflicts when multiple sub-agents produce divergent strategies during execution? Is there a meta-coordination layer or voting mechanism?
Also interested in the cost model - does the verification budget scale with goal importance, or is it fixed per execution?
nthakkar1107|18 days ago
The integration patterns are clean and actually make me want to contribute.
Emar7|18 days ago
The OODA framing resonates - treating exceptions as observations rather than crashes is exactly how the self-healing should work. The stress/neuroplasticity concept for preventing infinite loops is clever.
One thing I'd love to see explored more: structured audit logging for credential access. With enterprise sources (Vault/AWS/Azure) on the roadmap, compliance tracking becomes essential.
AIorNot|18 days ago
This company is a fraud - please remove this scam company's hype from HN.
Their "AI agent" website is just LLM slop and marketing hype!
They tried to hire folks in India to hype their repo and do fraudulent growth for some apparently crap AI "agent" platform: https://www.reddit.com/r/developersIndia/s/a1fQC5j0FM
https://news.ycombinator.com/item?id=46764091
BonoboIO|17 days ago
Great work.
You are now banned wherever I work.
spankalee|18 days ago
What does this even mean?
AIorNot|18 days ago
This guy seems to be behind it
https://www.linkedin.com/in/jianhao-zhang