Sintropy Engine
https://github.com/LeonFelipeCordero/sintropy-engine
Sintropy Engine is a message broker I built from scratch using Kotlin and Quarkus. It started from wanting to create a replication point system using event sourcing, and the idea kept expanding from there: what began as an experiment grew into a full message broker with its own design decisions.
The broker supports two types of channels: Queues, for persistent polling-based consumption, and Streams, for real-time delivery over WebSockets. Queues can work in two modes: STANDARD, where multiple consumers compete for messages (load balancing), and FIFO, where messages are processed strictly in order. For streaming I use PostgreSQL logical replication with the wal2json plugin, which lets me push new messages to WebSocket consumers in real time directly from the database WAL.
One design decision I’m happy with is the circuit breaker working at the routing-key level instead of the whole channel. When a message fails in a FIFO queue, the circuit breaker opens only for that specific (channel, routing_key) pair and routes remaining messages to the dead letter queue, preventing out-of-order processing without blocking unrelated traffic. Most brokers do this at the channel level, but per-routing-key felt like the right granularity.
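To make the granularity concrete, here is a minimal sketch of what per-routing-key circuit breaker state could look like in PostgreSQL. The table and column names are illustrative, not the project's actual schema:

```sql
-- Hypothetical sketch: breaker state keyed by (channel, routing_key),
-- not by channel alone. Names are assumptions, not the real schema.
create table if not exists circuit_breakers (
  channel_id  bigint      not null,
  routing_key text        not null,
  open        boolean     not null default false,
  opened_at   timestamptz,
  primary key (channel_id, routing_key)
);

-- When a FIFO message fails, open the breaker for that pair only;
-- unrelated routing keys on the same channel keep flowing.
insert into circuit_breakers (channel_id, routing_key, open, opened_at)
values (:channelId, :routingKey, true, now())
on conflict (channel_id, routing_key)
do update set open = true, opened_at = now();
```

Keying the primary key on the pair is what lets one poisoned routing key trip without touching its neighbors.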
The DLQ is not just a dump for failed messages. Every message keeps its originMessageId so you can trace its history, and it supports single, batch, and bulk recovery back to the original channel. All of this is enforced through PostgreSQL triggers, keeping the logic close to the data.
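A trigger-based DLQ insertion along these lines would keep the logic in the database; this is a hedged sketch, and the actual table, column, and trigger names in the project may differ:

```sql
-- Hypothetical sketch: copy failed messages into the DLQ from a trigger,
-- preserving origin_message_id so the history stays traceable.
create or replace function move_to_dlq() returns trigger as $$
begin
  if new.status = 'FAILED' then
    insert into dead_letter_messages
      (origin_message_id, channel_id, routing_key, payload, failed_at)
    values
      (new.message_id, new.channel_id, new.routing_key, new.payload, now());
  end if;
  return new;
end;
$$ language plpgsql;

create trigger messages_to_dlq
  after update of status on messages
  for each row execute function move_to_dlq();
```

Because the trigger runs in the same transaction as the status update, a message can never be marked failed without its DLQ copy existing.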
Channels can be linked together for automatic message propagation, enabling fan-out and event-driven pipelines. The system validates compatibility between channels, so a standard unordered queue can’t link to a FIFO or stream path, which prevents data loss from mismatched ordering guarantees.
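The compatibility validation could itself live in a trigger, in the same spirit as the rest of the design. A minimal sketch, assuming a channels table with a channel_type column and a channel_links table (both names are illustrative):

```sql
-- Hypothetical sketch: reject links whose ordering guarantees don't match.
create or replace function validate_channel_link() returns trigger as $$
declare
  source_type text;
  target_type text;
begin
  select channel_type into source_type
    from channels where channel_id = new.source_channel_id;
  select channel_type into target_type
    from channels where channel_id = new.target_channel_id;

  -- An unordered STANDARD queue must not feed an ordered FIFO or STREAM path.
  if source_type = 'STANDARD' and target_type in ('FIFO', 'STREAM') then
    raise exception 'incompatible link: % -> %', source_type, target_type;
  end if;
  return new;
end;
$$ language plpgsql;

create trigger channel_link_compatibility
  before insert on channel_links
  for each row execute function validate_channel_link();
```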
I chose PostgreSQL as the backbone instead of a specialized store. It keeps operational complexity low, and using logical replication with wal2json for streaming is optional and feature-flag gated, so the same codebase works in both single-instance and replicated setups. JOOQ gives me type-safe queries without a generic ORM.
FIFO ordering and the guarantee of a single poller per FIFO path are both achieved using pg_try_advisory_xact_lock. The advisory lock is acquired inside the query itself, so the locking, filtering, and status update all happen in a single atomic statement. For example, the FIFO polling query looks like this:
    with result as (
        select message_id
        from messages
        where channel_id = :channelId
          and routing_key = :routingKey
          and status = 'READY'
          -- skip the path entirely while any message is still in flight,
          -- preserving strict ordering
          and not exists (
              select 1
              from messages
              where channel_id = :channelId
                and routing_key = :routingKey
                and status = 'IN_FLIGHT')
          -- transaction-scoped advisory lock: only one poller per path
          and pg_try_advisory_xact_lock(:hash)
        order by timestamp
        limit :pollingCount
        for update skip locked)
    update messages
    set status          = 'IN_FLIGHT',
        last_delivered  = now(),
        delivered_times = delivered_times + 1,
        updated_at      = now()
    from result
    where messages.message_id = result.message_id
    returning messages.*;
The advisory lock guarantees only one consumer can poll a given FIFO path at a time, while for update skip locked handles concurrency for standard queues. Both strategies live in plain SQL without any application-level locking.
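By contrast, standard-mode polling could be sketched like this, following the same column names as the FIFO query above (the project's actual query may differ): no advisory lock and no in-flight check, so competing consumers each claim a disjoint batch thanks to FOR UPDATE SKIP LOCKED.

```sql
-- Hypothetical sketch of STANDARD-mode polling: load balancing via
-- FOR UPDATE SKIP LOCKED, with no per-path exclusivity.
with result as (
    select message_id
    from messages
    where channel_id = :channelId
      and status = 'READY'
    order by timestamp
    limit :pollingCount
    for update skip locked)
update messages
set status          = 'IN_FLIGHT',
    last_delivered  = now(),
    delivered_times = delivered_times + 1,
    updated_at      = now()
from result
where messages.message_id = result.message_id
returning messages.*;
```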
A big part of the design also relies on PostgreSQL triggers. Message routing to linked channels, DLQ insertion on failure, circuit breaker activation, and event logging all happen inside the database, in the same transaction as the original operation. The application doesn't need extra round trips to coordinate these side effects: instead of the broker making multiple sequential calls per message, the trigger chain handles everything in one shot. This made a real difference during load testing, where the broker handled more than 2,000 messages per second on a local network.
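The fan-out step of that chain could be sketched as a trigger like the following; the table and function names are assumptions, and cycle detection for linked channels is omitted for brevity:

```sql
-- Hypothetical sketch: on insert, copy the message to every linked channel
-- in the same transaction. Re-insertion fires the trigger again, which is
-- how a multi-hop pipeline propagates.
create or replace function propagate_to_links() returns trigger as $$
begin
  insert into messages (channel_id, routing_key, payload, status)
  select l.target_channel_id, new.routing_key, new.payload, 'READY'
  from channel_links l
  where l.source_channel_id = new.channel_id;
  return new;
end;
$$ language plpgsql;

create trigger messages_propagate
  after insert on messages
  for each row execute function propagate_to_links();
```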
For setup, I added an Infrastructure as Code layer that reads a declarative JSON file to create channels, producers, and links on startup. It computes a hash of the config and detects drift automatically, reconciling what’s missing and removing what’s orphaned.
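A declarative file along these lines would drive that reconciliation; the field names here are illustrative, and the project's actual config schema may differ:

```json
{
  "channels": [
    { "name": "orders", "type": "FIFO" },
    { "name": "orders-stream", "type": "STREAM" }
  ],
  "producers": [
    { "name": "checkout-service", "channel": "orders" }
  ],
  "links": [
    { "source": "orders", "target": "orders-stream" }
  ]
}
```

On startup, hashing this file and comparing against the stored hash is what lets the broker detect drift without diffing every object individually.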
The whole process has been a lot of fun, every part of it: from the replication layer to the circuit breakers to the IaC setup.
I keep iterating on it, exploring things like in-memory caching, optimizing FIFO throughput, and stream cursor tracking for consumer resume.