Store processed event IDs in a deduplication table. In a single database transaction, attempt to insert the event ID and then run the handler — if the insert is skipped because the ID already exists, the handler is not called. On top of at-least-once delivery, this yields effectively exactly-once processing, provided the handler's side effects live in the same database as the deduplication table.
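A minimal sketch of the pattern, using Python's sqlite3 for a self-contained demo — the table name processed_events, its schema, and the process_event helper are illustrative, not prescribed by the text:

```python
import sqlite3

def process_event(conn, event_id, payload, handler):
    """Run handler(payload) at most once for a given event_id.

    The dedup insert and the handler share one transaction: a duplicate
    ID makes the insert a no-op and skips the handler; a handler failure
    rolls the insert back, so the event can be retried on redelivery.
    """
    with conn:  # one transaction: commits on success, rolls back on error
        cur = conn.execute(
            "INSERT INTO processed_events (event_id) VALUES (?) "
            "ON CONFLICT (event_id) DO NOTHING",
            (event_id,),
        )
        if cur.rowcount == 0:  # duplicate key: already processed
            return False
        handler(payload)  # the handler's DB writes would join this transaction
        return True

# demo: in-memory database, illustrative schema
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed_events (event_id TEXT PRIMARY KEY)")

calls = []
first = process_event(conn, "evt-1", {"amount": 42}, calls.append)
duplicate = process_event(conn, "evt-1", {"amount": 42}, calls.append)  # redelivery
```

The key design point is that the duplicate check and the handler's work commit atomically — checking a dedup cache outside the transaction would leave a window where a crash between check and commit loses or duplicates work.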
Every event must carry a unique eventId — generate a UUID on the producer side before publishing.
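Producer-side ID generation might look like the following — the envelope fields (eventId, type, payload) and the make_event helper are assumed names for illustration:

```python
import json
import uuid

def make_event(event_type, payload):
    """Wrap a payload in an envelope with a unique eventId before publishing."""
    return {
        # generated once by the producer; redeliveries of this event
        # carry the same ID, which is what makes deduplication possible
        "eventId": str(uuid.uuid4()),
        "type": event_type,
        "payload": payload,
    }

event = make_event("order.created", {"orderId": 123})
body = json.dumps(event)  # what the producer would actually publish
other = make_event("order.created", {"orderId": 123})  # distinct event, distinct ID
```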
The deduplication insert and the handler run in the same database transaction, so they commit or roll back together — if the handler fails, the event ID is not recorded and a redelivery can retry cleanly.
orIgnore() in the query builder (ON CONFLICT DO NOTHING in PostgreSQL and SQLite, INSERT IGNORE in MySQL) turns a duplicate-key insert into a no-op instead of an error, so duplicate event IDs are silently skipped.
Clean up old processed events periodically — retain them only long enough to cover the maximum redelivery window, or the table grows without bound.
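A cleanup job could be as simple as a timestamped delete. This sketch assumes the deduplication table also records a processed_at timestamp, and the seven-day retention is a placeholder that should exceed the broker's maximum redelivery window:

```python
import datetime
import sqlite3

RETENTION = datetime.timedelta(days=7)  # assumption: longer than any redelivery window

def purge_processed_events(conn, now):
    """Delete dedup rows older than the retention period; returns rows removed."""
    cutoff = (now - RETENTION).isoformat()
    with conn:
        cur = conn.execute(
            "DELETE FROM processed_events WHERE processed_at < ?", (cutoff,)
        )
        return cur.rowcount

# demo: illustrative schema with a processed_at column (ISO-8601 text)
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE processed_events "
    "(event_id TEXT PRIMARY KEY, processed_at TEXT NOT NULL)"
)
now = datetime.datetime.now(datetime.timezone.utc)
rows = [
    ("old-evt", (now - datetime.timedelta(days=30)).isoformat()),  # past retention
    ("new-evt", now.isoformat()),                                  # still retained
]
conn.executemany("INSERT INTO processed_events VALUES (?, ?)", rows)
deleted = purge_processed_events(conn, now)
remaining = [r[0] for r in conn.execute("SELECT event_id FROM processed_events")]
```

Deleting a dedup row too early reopens the door to reprocessing that event if the broker redelivers it, which is why retention must be tied to the redelivery window rather than to table size alone.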
Idempotency is essential for Kafka (at-least-once), RabbitMQ with requeuing, and any retry mechanism.