Show HN: Tusk Drift – Open-source tool for automating API tests
56 points | Marceltan | 3 months ago | github.com
How it works:
1. Records traces from live traffic (what gets captured)
2. Replays traces as API tests with mocked responses (how replay works)
3. Detects deviations between actual vs. expected output (what you get)
Unlike traditional mocking libraries, which require you to manually emulate how dependencies behave, Tusk Drift automatically records how these dependencies actually respond to real user behavior and maintains those recordings over time. We built this after painful past experiences with brittle API test suites and regressions that were only caught in prod.
Our SDK instruments your Node service, similar to OpenTelemetry. It captures all inbound requests and outbound calls like database queries, HTTP requests, and auth token generation. When Drift is triggered, it replays the inbound API call while intercepting outbound requests and serving them from recorded data. Drift’s tests are therefore idempotent, side-effect free, and fast (typically <100 ms per test). Think of it as a unit test but for your API.
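Roughly, the interception works like this (a minimal sketch of the idea, not our SDK's actual API; the function names and key scheme here are assumptions):

```typescript
// Minimal sketch of the record/replay switch (illustrative only, not the
// SDK's actual API; the key scheme and names are assumptions).
import crypto from "node:crypto";

type Mode = "record" | "replay";

const recordings = new Map<string, unknown>();

// Deterministic key for an outbound call: dependency name + serialized arguments.
function keyFor(dep: string, args: unknown[]): string {
  return crypto
    .createHash("sha256")
    .update(dep + JSON.stringify(args))
    .digest("hex");
}

async function intercept<T>(
  mode: Mode,
  dep: string, // e.g. "pg.query", "http.request", "jwt.sign"
  args: unknown[],
  realCall: () => Promise<T>,
): Promise<T> {
  const key = keyFor(dep, args);
  if (mode === "replay") {
    if (!recordings.has(key)) throw new Error(`No recording for ${dep}`);
    return recordings.get(key) as T; // served from recorded data, no live dependency
  }
  const result = await realCall(); // record mode: pass through to the real dependency
  recordings.set(key, result); // capture for later replay
  return result;
}
```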
Our Cloud platform does the following automatically:
- Updates the test suite of recorded traces to maintain freshness
- Matches relevant Drift tests to your PR’s changes when running tests in CI
- Surfaces unintended deviations, does root cause analysis, and suggests code fixes
We’re excited to see this use case finally unlocked. The release of Claude Sonnet 4.5 and similar coding models has made it possible to go from a failing test to its root cause reliably. Accurate test matching and deviation classification also mean that running a tool like this in CI no longer hurts DevEx (imagine the time otherwise spent reviewing test results).
Limitations:
- You can specify PII redaction rules, but there is no default redaction mode at the moment. I recommend first enabling Drift on dev/staging, adding transforms (https://docs.usetusk.ai/api-tests/pii-redaction/basic-concep...), and monitoring for a week before enabling it on prod. A simplified transform sketch follows this list.
- Expect a 1-2% throughput overhead. With a small number of transforms registered, they add about 1.0% to tail latency; the impact scales linearly with the number of transforms registered.
- Currently only supports Node backends. Python SDK is coming next.
- Instrumentation limited to the following packages (more to come): https://github.com/Use-Tusk/drift-node-sdk?tab=readme-ov-fil...
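On the transforms point above: here's a minimal sketch of what a redaction transform could look like (the Span shape and names are simplified assumptions on my part; the linked docs have the real API):

```typescript
// Sketch of the transform idea (the Span shape and field names here are
// assumptions, not the SDK's actual API; see the linked docs for the real one).
interface Span {
  packageName: string;                 // e.g. "http", "pg"
  inputValue: Record<string, unknown>; // captured request data
  outputValue: Record<string, unknown>; // captured response data
}

// Redact an email field from captured input before the trace is persisted,
// so the PII never leaves the service in record mode.
function redactEmail(span: Span): Span {
  const { email, ...rest } = span.inputValue;
  if (email === undefined) return span;
  return {
    ...span,
    inputValue: { ...rest, email: "[REDACTED]" },
  };
}
```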
Let me know if you have questions or feedback.
Demo repo: https://github.com/Use-Tusk/drift-node-demo
sg_gabriel|3 months ago
Also, how do you normalize non-determinism (like time/IDs etc.), expire/refresh recordings, and classify diffs as "intentional change" vs "regression"?
Marceltan|3 months ago
1. With our Cloud offering, Tusk Drift detects schema changes, then automatically re-records traces from new live traffic to replace the stale traces in the test suite. If using Drift purely locally though, you'd need to manually re-record traces for affected endpoints by hitting them in record mode to capture the updated behavior.
2. Our CLI tool includes built-in dynamic field rules that handle common non-deterministic values in standard UUID, timestamp, and date formats during response comparison. You can also configure custom matching rules in your `.tusk/config.yaml` to handle application-specific non-deterministic data (see the sketch after this list).
3. Our classification workflow correlates deviations with your actual code changes in the PR/MR (including context from your PR/MR title and body). Classification is "fine-tuned" over time for each service based on past feedback on test results.
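As an illustration only (the key names here are made up, not our documented schema), such a rule could look like:

```yaml
# Hypothetical .tusk/config.yaml snippet (key names are assumptions, not the
# documented schema): mark fields whose values differ across runs.
comparison:
  dynamic_fields:
    - path: "response.body.id"        # UUID regenerated per request
      type: uuid
    - path: "response.body.createdAt" # timestamp differs between record and replay
      type: timestamp
```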
vitorbaptistaa|3 months ago
Another useful feature would be creating tests from saved requests exported from my browser's network tab. That way your tool would work regardless of the backend language.
Marceltan|3 months ago
Currently, Drift is language specific. You'd need the SDK installed in your backend while recording tests. This is because Drift captures not just the HTTP request/response pairs, but also all underlying dependency calls (DB queries, Redis operations, etc.) to properly mock them during replay.
A use case we do support is refactors within the same language. You'd record traces in your current implementation, refactor your code, then replay those traces to catch regressions.
For cross-language rewrites or browser-exported requests, you might want to look at tools that focus purely on HTTP-level recording/replay like Postman Collections. Hope this helps!
vitorbaptistaa|3 months ago
vcrpy is closer to an auto-mock: you write tests that hit external services, vcrpy records the responses, and it replays them on subsequent runs. You still write the tests yourself.
Here you don't write tests at all; you just use the app, and the tests are created automatically.
Similar ideas, but at a different layer.
Marceltan|3 months ago
We capture the actual DB queries, Redis cache hits, and JWT generation, not just the HTTP calls (like you would see with mitmproxy), which lets us replay the full request chain without needing a live database or cache. This way each test runs idempotently.
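As a rough illustration of the pattern for one package (simplified; not our actual instrumentation code), you can picture the pg client being wrapped like this:

```typescript
// Rough illustration of instrumenting one package (not the SDK's actual
// internals). In replay mode, recorded rows are served and no live DB is needed.
import { Client, type QueryResult } from "pg";

const recorded = new Map<string, QueryResult>(); // keyed by query text + params
const realQuery = Client.prototype.query;

(Client.prototype as any).query = async function (
  text: string,
  params?: unknown[],
): Promise<QueryResult> {
  const key = JSON.stringify([text, params]);
  if (process.env.DRIFT_MODE === "replay") {
    const hit = recorded.get(key);
    if (!hit) throw new Error(`No recorded result for query: ${text}`);
    return hit; // recorded rows, no live database
  }
  const result = (await (realQuery as any).call(this, text, params)) as QueryResult;
  recorded.set(key, result); // record mode: capture real rows
  return result;
};
```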
bilekas|3 months ago
Looks like a nice tool, will check it out later when I get a chance.