tonyww|1 month ago
We used it because it’s a dynamic, hostile UI, but the design goal is a site-agnostic control plane. That’s why the runtime avoids selectors and screenshots and instead operates on pruned semantic snapshots + verification gates.
If the layout changes, the system doesn’t “half-work” — it fails deterministically with artifacts. That’s the behavior we’re optimizing for.
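To make the "fails deterministically with artifacts" idea concrete, here is a minimal sketch of a verification gate over a pruned semantic snapshot. All names here (`Snapshot`, `verify`, `GateFailure`, the artifact path) are illustrative assumptions, not the project's actual API:

```python
import json
from dataclasses import dataclass

@dataclass
class Snapshot:
    """Pruned semantic snapshot of the UI: roles/labels, no pixels or selectors."""
    nodes: list

class GateFailure(Exception):
    """Raised when a named semantic assertion does not hold."""
    def __init__(self, assertion, artifact_path):
        super().__init__(f"gate failed: {assertion} (artifact: {artifact_path})")
        self.assertion = assertion
        self.artifact_path = artifact_path

def verify(snapshot, assertion, predicate, artifact_path="gate_failure.json"):
    """Either the predicate holds, or the run stops at a named assertion
    and writes the snapshot out as an artifact. No 'half-working' path."""
    if predicate(snapshot):
        return snapshot
    with open(artifact_path, "w") as f:
        json.dump({"assertion": assertion, "nodes": snapshot.nodes}, f)
    raise GateFailure(assertion, artifact_path)
```

The point of the sketch is that a layout change surfaces as a specific failed assertion plus a saved artifact, rather than an exception deep inside selector-matching code.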
tomhow|1 month ago
That's a prerequisite for Show HN.
I'm removing the Show HN prefix for now, until we get clarity. Then we can consider re-upping the post once we know exactly how to present it.
ares623|1 month ago
How is this different from building a scraper script that does it the traditional way?
tonyww|1 month ago
A traditional scraper/script hard-codes selectors and control flow up front. When the layout changes, it usually breaks at an arbitrary line and you debug it manually.
In this setup, the agent chooses actions at *runtime* from a bounded action space, and the system uses built-in predicates (e.g. url_changes, drawer_appeared) to verify the outcomes. When it fails, it fails at a specific semantic assertion with artifacts, not at a missing selector.
So it’s less “replace scripts” and more “apply test-style verification and recovery to AI-driven decisions instead of static code.”
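The runtime-choice-plus-verification loop described above can be sketched roughly like this. The action names, predicate names, and the `run_step` interface are all assumptions for illustration, not the actual system:

```python
# Bounded action space: the agent may only pick from these.
ACTIONS = {"click", "type", "scroll"}

# Named outcome predicates, checked against before/after UI state.
PREDICATES = {
    "url_changes":     lambda before, after: after["url"] != before["url"],
    "drawer_appeared": lambda before, after: "drawer" in after["regions"],
}

def run_step(agent_choice, expected, before, do_action):
    """Execute one agent-chosen action, then verify the expected outcome
    with a named predicate rather than trusting the action blindly."""
    if agent_choice not in ACTIONS:
        raise ValueError(f"action {agent_choice!r} outside bounded space")
    after = do_action(agent_choice)
    if not PREDICATES[expected](before, after):
        # Fails at a specific semantic assertion, not a missing selector.
        raise AssertionError(f"predicate {expected!r} did not hold")
    return after
```

Contrast with a hard-coded script: here the control flow is chosen per step, but every step still has to pass a test-style assertion before the run continues.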
blibble|1 month ago