
Ask HN: How do you handle duplicate side effects when jobs, workflows retry?

10 points | shineDaPoker | 2 days ago

Quick context: I'm building background job automation and keep hitting this pattern:

1. Job calls external API (Stripe, SendGrid, AWS)
2. API call succeeds
3. Job crashes before recording success
4. Job retries → calls API again → duplicate

Example: process refund, send email notification, crash. Retry does both again. Customer gets duplicate refund email (or worse, duplicate refund).
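A minimal simulation of this failure mode (hypothetical names, in-memory stand-ins for the API and the job store): the external call succeeds, the worker dies before recording success, and the retry repeats the side effect.

```python
api_calls = []     # stands in for the external API (e.g. a refund endpoint)
completed = set()  # stands in for the job system's success record

def process_refund(job_id, crash_after_call=False):
    if job_id in completed:
        return
    api_calls.append(job_id)  # the side effect happens here
    if crash_after_call:
        raise RuntimeError("worker died before recording success")
    completed.add(job_id)

try:
    process_refund("job-1", crash_after_call=True)  # first attempt crashes
except RuntimeError:
    pass
process_refund("job-1")                # retry: the API is called again
assert api_calls == ["job-1", "job-1"]  # duplicate refund
```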

I see a few approaches:

Option A: Store processed IDs in a database.
Problem: The race between "check DB" and "call API" can still duplicate.

Option B: Use API idempotency keys (Stripe supports this).
Problem: Not all APIs support them (legacy systems, third parties).

Option C: Build a deduplication layer that checks the external system first.
Problem: Extra latency, extra complexity.

What do you do in production? Accept some duplicates? Only use APIs with idempotency? Something else?

(I built something for Option C, but trying to understand if this is actually a common-enough problem or if I'm over-engineering.)

11 comments


jnbridge|1 day ago

This is one of those problems that gets significantly harder when your system spans multiple runtimes or platforms.

A few patterns that have worked well in practice:

1. Idempotency keys at the API boundary — every side-effecting call gets a client-generated UUID, and the receiver deduplicates. Simple, but think carefully about the TTL of your dedup window.

2. Outbox pattern — instead of directly calling the external service, write the intent to a local "outbox" table in the same transaction as your state change. A separate process polls the outbox and delivers. Debezium + CDC makes this quite clean.
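A minimal outbox sketch using SQLite as a stand-in for the application DB (table and function names are illustrative): the intent to call the external service commits atomically with the state change, and a separate poller delivers at-least-once.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE refunds (id TEXT PRIMARY KEY, status TEXT)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, delivered INTEGER DEFAULT 0)")

def request_refund(refund_id):
    # One transaction: the state change and the outbox row commit together,
    # so there is no window where one exists without the other.
    with db:
        db.execute("INSERT INTO refunds VALUES (?, 'pending')", (refund_id,))
        db.execute("INSERT INTO outbox (payload) VALUES (?)", (f"refund:{refund_id}",))

delivered = []

def poll_outbox():
    # At-least-once delivery: rows are marked delivered only after sending.
    rows = db.execute("SELECT id, payload FROM outbox WHERE delivered = 0").fetchall()
    for row_id, payload in rows:
        delivered.append(payload)  # stand-in for the actual API call
        db.execute("UPDATE outbox SET delivered = 1 WHERE id = ?", (row_id,))
    db.commit()

request_refund("r-1")
poll_outbox()
assert delivered == ["refund:r-1"]
```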

3. For cross-system workflows: treat the saga orchestrator as the single source of truth for step completion. Each step checks its completion status before executing, so steps must be idempotent OR the orchestrator tracks state.
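A toy sketch of pattern 3 (names are illustrative, state is in-memory): the orchestrator records step completion, and each step consults that record before executing, so a replayed workflow skips steps that already ran.

```python
completed_steps = set()  # the orchestrator's source of truth for completion
effects = []

def run_step(workflow_id, step_name, action):
    key = (workflow_id, step_name)
    if key in completed_steps:
        return           # this step already ran in an earlier attempt
    action()
    completed_steps.add(key)

def run_workflow(workflow_id):
    run_step(workflow_id, "refund", lambda: effects.append("refund"))
    run_step(workflow_id, "email", lambda: effects.append("email"))

run_workflow("wf-1")
run_workflow("wf-1")     # replay after a crash: no duplicate side effects
assert effects == ["refund", "email"]
```

The same crash window the OP describes still exists between `action()` and `completed_steps.add(key)`, which is why the steps themselves should also be idempotent.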

In practice, designing for at-least-once delivery + idempotent receivers is more reliable than trying to achieve exactly-once through distributed coordination. Exactly-once across system boundaries is effectively a myth outside of systems that support two-phase commit (and even then it's fragile).

dakiol|2 hours ago

I think these suggestions are fine, but none of them solves the problem here. The "client" (the service the OP owns) can be as atomic and transactional as one wants, but if the "server" (the third-party service being called by the "client") doesn't offer either a) idempotency or b) a retrieval mechanism for existing resources, then the "client" can't do anything about the originally stated problem.

fernando_campos|13 hours ago

Retries become dangerous when workflows aren't designed to be idempotent from the beginning.

What helped us was treating every job execution as replayable and attaching a unique operation key instead of relying on execution state alone.

Otherwise retries silently create data inconsistencies that only appear much later.

moomoo11|2 days ago

You proxy those api calls yourself and have idempotency to cover you for those APIs that don’t have it. If you architect it right you won’t have more than a ms latency addition. You can avoid the race condition issues by using atomic records so if something else tries they’d see it’s in progress and exit.
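One way to get the atomic record without a check-then-act race is a unique constraint: the first worker to INSERT the claim row wins, and anyone else gets a constraint violation, sees the operation is in progress, and exits. A sketch with SQLite (names illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ops (op_key TEXT PRIMARY KEY, state TEXT)")

def try_claim(op_key):
    """Atomically claim an operation; return False if someone else owns it."""
    try:
        with db:
            db.execute("INSERT INTO ops VALUES (?, 'in_progress')", (op_key,))
        return True   # we inserted the row, so we own this operation
    except sqlite3.IntegrityError:
        return False  # row already exists: in progress (or done) elsewhere

calls = []
if try_claim("refund:cust-42"):
    calls.append("api call")   # only the single claimant reaches this
if try_claim("refund:cust-42"):  # a concurrent or retrying worker
    calls.append("api call")
assert calls == ["api call"]
```

The check and the claim are one statement, so there is no gap between them for another worker to slip through.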

shineDaPoker|2 days ago

This is exactly the approach I took. Proxy layer that:

- Uses atomic records (fence tokens) to prevent concurrent execution
- Checks the external system first before retrying (the retrieval step)
- Records the result for future lookups

The atomic records part is critical - I learned the hard way that just checking a DB flag isn't enough (process can freeze between check and execute, lease expires, another process takes over, both execute).
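The fence-token idea can be sketched like this (in-memory, illustrative names): each lock acquisition gets a monotonically increasing token, and the completion record rejects writes from any token lower than the highest it has seen. This protects your own records from a frozen, stale holder; it cannot stop the stale holder's external API call itself unless that call is also routed through something that checks the token.

```python
next_token = 0     # issued by the lock service, strictly increasing
highest_seen = -1  # highest token the record store has accepted
record = {}

def acquire():
    global next_token
    next_token += 1
    return next_token

def write_result(token, key, value):
    global highest_seen
    if token < highest_seen:
        return False           # stale holder: write fenced off
    highest_seen = token
    record[key] = value
    return True

t1 = acquire()  # original holder, then freezes mid-flight
t2 = acquire()  # lock expires, a new process takes over
assert write_result(t2, "refund:r-1", "done")
assert not write_result(t1, "refund:r-1", "done-again")  # rejected
```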

How do you handle the case where:

1. Process acquires atomic lock
2. Calls external API successfully
3. Process freezes before releasing lock
4. Lock expires, new process acquires it
5. New process calls API again → duplicate

Do you just accept this edge case (rare but possible)? Or is there a mitigation I'm missing?

codebitdaily|2 days ago

Idempotency is the only sustainable answer here. Whether it's at the database level using unique constraints or implementing idempotency keys in your API headers, you have to design for the at-least-once delivery reality. I usually implement a `processed_requests` table that stores the unique ID of the job. Before the worker executes any side effect (like a payment or email), it checks if that ID exists. If it does, it skips the execution and returns the previous result. It adds a bit of latency, but it's much cheaper than dealing with double-billing or corrupted data.
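A minimal sketch of that flow (SQLite in-memory, hypothetical names), mirroring the order described above: check, execute, then record. Note the crash window between executing and recording still exists, which is the OP's original problem.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE processed_requests (job_id TEXT PRIMARY KEY, result TEXT)")

side_effects = []

def run_job(job_id):
    row = db.execute(
        "SELECT result FROM processed_requests WHERE job_id = ?", (job_id,)
    ).fetchone()
    if row:
        return row[0]               # already done: skip, return previous result
    side_effects.append(job_id)     # e.g. charge the card, send the email
    result = "ok"
    with db:
        db.execute("INSERT INTO processed_requests VALUES (?, ?)", (job_id, result))
    return result

assert run_job("job-9") == "ok"
assert run_job("job-9") == "ok"     # retry: cached result, no second side effect
assert side_effects == ["job-9"]
```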

stephenr|2 days ago

I think the answer is probably like most things: it depends.

- If the external service supports idempotent operations, use that option.

- If the external service doesn't, but has a "retrieval" feature (i.e. look up whether the thing already exists, e.g. fetch refunds on a given payment), use that first.

- If the system has neither, assess how critical it is to avoid duplicates.
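The retrieval-first branch can be sketched as observe-before-act. `fetch_refunds` and `create_refund` are hypothetical stand-ins for a third-party client that has no idempotency keys but does support lookup:

```python
existing = {"pay_1": ["re_1"]}  # pay_1 was already refunded by a crashed attempt
created = []

def fetch_refunds(payment_id):
    return existing.get(payment_id, [])

def create_refund(payment_id):
    refund_id = f"re_{payment_id}"
    existing.setdefault(payment_id, []).append(refund_id)
    created.append(refund_id)
    return refund_id

def refund_once(payment_id):
    found = fetch_refunds(payment_id)
    if found:
        return found[0]   # already refunded: reuse it instead of duplicating
    return create_refund(payment_id)

assert refund_once("pay_1") == "re_1"  # retrieval caught the earlier refund
assert created == []                   # no duplicate was created
refund_once("pay_2")
assert created == ["re_pay_2"]
```

A race is still possible between the lookup and the create, so this usually pairs with the atomic-claim idea discussed above rather than replacing it.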

shineDaPoker|2 days ago

This matches my thinking. The retrieval/lookup approach is exactly what I built - basically Option C with an observe-before-act pattern.

For APIs that support idempotency keys (Stripe, etc.), I use those. For ones that don't but have retrieval (most do), I check first before retrying.

The question I'm wrestling with: is the extra round-trip for the lookup worth it? Or should I just accept the edge cases where it duplicates?

What's your threshold for "critical enough to avoid duplicates"? Payments obviously yes, but what about notifications, reporting, analytics events?

babelfish|2 days ago

Use something like Temporal

shineDaPoker|2 days ago

I actually talked to someone at Temporal about this recently. Temporal gives you the primitives to handle it (activities, configurable retries, interceptors), but you still have to implement the deduplication logic yourself for each external integration.

His advice was: Temporal solves orchestration, but making the external API calls idempotent is on you. For simple cases, write observe activities manually. For complex cases, build an abstraction.

That's what led me down this path - trying to figure out if the abstraction is worth building or if manual is good enough.

Have you used Temporal for this? How do you handle the idempotency of external calls?