Show HN: Sandy – Accelerate AI agents: think once, replay forever

2 points | sangkwun | 27 days ago | github.com

I gave my AI agent a Sandevistan.

What's the point of browser automation that's slower than a human?

Watching it pause for seconds before every click drove me crazy.

We build muscle memory for repetitive tasks—why can't AI?

Traditional Agent

Observe → LLM reasoning (slow) → Action → Observe → LLM reasoning → ... repeat.

Stops to think before every click.
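That slow loop can be sketched like this (a minimal illustration with stub objects; the names are made up, not any real agent framework's API):

```python
# Traditional agent loop: every single action waits on an LLM round trip.
# StubLLM stands in for a real model; the "browser" is just a log list.
class StubLLM:
    def __init__(self, plan):
        self.plan = list(plan)
        self.calls = 0

    def decide(self, goal, state):
        self.calls += 1                      # one slow round trip per action
        return self.plan.pop(0) if self.plan else None

def run_agent(goal, browser_log, llm):
    while True:
        state = tuple(browser_log)           # "observe" the current page
        action = llm.decide(goal, state)     # slow: seconds per call
        if action is None:                   # model decides we're done
            return
        browser_log.append(action)           # "execute" the click/type/etc.

llm = StubLLM(["open youtube", "type query", "click result"])
log = []
run_agent("play a video", log, llm)
# llm.calls == 4: one LLM call per action, plus a final "done" decision
```

The point of the sketch: the LLM sits on the critical path of every action, so latency and token cost scale with the number of clicks.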

Sandy

1. First run: the LLM figures out the workflow → save it as a scenario

2. Every run after: replay the scenario (no LLM calls)

Once you've blazed the trail, just follow the path.

The LLM only helps find the path once. After that, Sandy replays the saved scenario, so you get both speed and cost savings.
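A rough sketch of that record/replay split (not Sandy's actual code; the function names and the JSON format are illustrative assumptions):

```python
import json

def record(goal, llm_step):
    """First run: let the LLM drive, saving each action into a scenario."""
    scenario = []
    while (action := llm_step(goal)) is not None:
        scenario.append(action)
    return json.dumps(scenario)              # persist the blazed trail

def replay(scenario_json, execute):
    """Later runs: replay the saved actions with zero LLM calls."""
    for action in json.loads(scenario_json):
        execute(action)

# Record once (the lambda stands in for a real LLM planning step)...
steps = iter(["search", "click", "play"])
saved = record("play a video", lambda goal: next(steps, None))

# ...then replay any number of times, LLM-free.
out = []
replay(saved, out.append)
```

Replay is deterministic and costs nothing per run; the trade-off (as noted under limitations below) is that a UI change invalidates the saved actions.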

Demo (searching and playing a video on YouTube):

- Left: Traditional agent

- Right: Sandy (real-time, not sped up)

https://www.youtube.com/watch?v=nSKs8sy7o2c

Useful for:

- E2E test automation

- Regression tests (deterministic execution)

- Multi-tool workflows (GitHub issue → Slack notification, etc.)

Works with any MCP server, so you can chain browser automation and API calls in a single scenario.
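For instance, a saved multi-tool scenario might look roughly like this (a hypothetical format with stub dispatch, not Sandy's actual schema or MCP wire protocol):

```python
# Hypothetical saved scenario: browser steps and API steps chained in one
# replayable list, each tagged with the MCP server that handles it.
scenario = [
    {"server": "browser", "tool": "read_issue", "args": {"repo": "acme/app"}},
    {"server": "slack",   "tool": "post",       "args": {"channel": "#dev"}},
]

def replay(scenario, servers):
    """Dispatch each saved step to its MCP server - no LLM involved."""
    return [servers[step["server"]](step["tool"], step["args"])
            for step in scenario]

# Stub callables standing in for real MCP server connections.
calls = []
servers = {name: (lambda tool, args, n=name: calls.append((n, tool)))
           for name in ("browser", "slack")}
replay(scenario, servers)
```

Because each step names its server, one scenario can hop from a browser tool to a Slack tool without any per-step reasoning.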

GitHub: https://github.com/Sangkwun/sandy

Honest limitations:

- UI changes break scenarios (need re-recording)

- Better for repetitive workflows than dynamic exploration

Questions and feedback welcome. PRs and stars too!
