vineroute// field manual

Realtime

SSE pub/sub, no Redis, no socket framing

SSE · Pub/Sub · Hono

Overview

When a driver pings their position via `POST /api/driver/ping`, the API writes a LiveVehicle row to a ring buffer and emits a `vehicles` event on the bus. Subscribers, connected via `GET /sse/vehicles`, receive the event as a JSON line. The connection stays open; clients reconnect automatically on drop.

How it works

1. Pub/sub primitive: a single Node `EventEmitter`, exported as `bus` from `apps/api/src/lib/bus.ts`. Topics are channel names: `vehicles`, `announcements`, `reservations`. Subscribers call `bus.on('vehicles', handler)`.
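
A minimal sketch of what `apps/api/src/lib/bus.ts` could look like. Only the `bus` export and the channel names come from this page; the `LiveVehicle` payload shape here is assumed for illustration:

```typescript
import { EventEmitter } from "node:events";

// Hypothetical payload shape; the real LiveVehicle row has more fields.
export interface LiveVehicle {
  assignmentId: string;
  lat: number;
  lng: number;
  recordedAt: string;
}

// One process-wide emitter; channels are plain event names.
export const bus = new EventEmitter();
bus.setMaxListeners(0); // every SSE connection adds a listener

// Subscribers: bus.on("vehicles", handler)
// Publishers:  bus.emit("vehicles", vehicle)
```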

2. The driver ping handler ends with `bus.emit('vehicles', vehicle)`. The emit is fire-and-forget; subscribers handle their own backpressure.
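
The handler body, reduced to its two effects: buffer write, then fire-and-forget emit. `pushPosition` and `handlePing` are placeholder names for this sketch, not the real route code:

```typescript
import { EventEmitter } from "node:events";

interface LiveVehicle { assignmentId: string; lat: number; lng: number; recordedAt: string }

const bus = new EventEmitter();
const buffer: LiveVehicle[] = [];

// Placeholder for the ring-buffer write described under "Key decisions".
function pushPosition(v: LiveVehicle): void {
  buffer.push(v);
}

// What POST /api/driver/ping does with a parsed body.
function handlePing(vehicle: LiveVehicle): void {
  pushPosition(vehicle);
  // Fire-and-forget: emit() returns immediately; a slow subscriber
  // is each stream's problem, not the ping handler's.
  bus.emit("vehicles", vehicle);
}
```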

3. `GET /sse/vehicles` uses Hono's `streamSSE` helper. It writes one JSON line per event, prefixed with `data: ` and terminated with a double newline, per the SSE spec.
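
The wire format is simple enough to show directly. This helper is not from the codebase; it frames one event the way the SSE spec requires, with optional `id:` and `event:` fields, a `data:` line, and a blank-line terminator:

```typescript
// Frame one server-sent event. Multi-line data would need one
// "data:" line per line; our payloads are single-line JSON.
function frameSSE(id: number, event: string, payload: unknown): string {
  return `id: ${id}\nevent: ${event}\ndata: ${JSON.stringify(payload)}\n\n`;
}
```

Hono's `streamSSE` does this framing for you via `stream.writeSSE(...)`; the helper above just makes the bytes on the wire visible.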

4. Clients subscribe via `EventSource` (web) or `react-native-event-source` (mobile). Each parses the JSON, filters by `assignmentId` if scoped, and updates the UI.
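
Client-side, the per-message work is just parse, filter, update. A sketch of the filtering step, with field names assumed; the `EventSource` wiring is shown only in the trailing comment:

```typescript
interface VehicleEvent { assignmentId: string; lat: number; lng: number }

// Returns the parsed event if it matches the scope, else null.
// scope === undefined means "show everything" (e.g. a dispatch view).
function acceptEvent(raw: string, scope?: string): VehicleEvent | null {
  const v = JSON.parse(raw) as VehicleEvent;
  return scope === undefined || v.assignmentId === scope ? v : null;
}

// Browser wiring: new EventSource("/sse/vehicles"), then pass
// e.data from each message event through acceptEvent before rendering.
```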

5. Reconnection: the SSE spec mandates automatic reconnect, with the client echoing the last event id it saw in the `Last-Event-ID` header. We use a server-generated monotonic counter as the event id, so a client can resume from where it dropped (we don't persist the buffer across server restarts yet).
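
Resume is then a filter over the ring buffer: replay everything with an id greater than the client's `Last-Event-ID`. A sketch, with the buffered-event shape assumed:

```typescript
interface BufferedEvent { id: number; data: string }

// On reconnect the browser resends Last-Event-ID automatically;
// the server replays the tail of the buffer past that id.
function eventsSince(buffer: BufferedEvent[], lastEventId: number): BufferedEvent[] {
  return buffer.filter((e) => e.id > lastEventId);
}
```

Because the counter is monotonic, a restart that resets it to zero makes old `Last-Event-ID` values meaningless, which is exactly the "no persistence across restarts" caveat above.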

6. Backpressure: if a subscriber is slow, the stream writer buffers. We cap the buffer at 64 KB per stream and drop the slowest reader if it overflows; better to disconnect one client than to OOM the API.
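
The 64 KB cap can be tracked with a per-stream byte counter. The sketch below models only the accounting; in the real handler a `false` return would also end the HTTP response:

```typescript
const MAX_BUFFERED = 64 * 1024; // 64 KB per stream

class StreamGuard {
  private buffered = 0;

  // Call with the byte length of each frame written but not yet
  // flushed to the socket. Returns false when the reader should be
  // dropped rather than buffered further.
  write(frameBytes: number): boolean {
    this.buffered += frameBytes;
    return this.buffered <= MAX_BUFFERED;
  }

  // Call when the socket drains.
  drained(): void {
    this.buffered = 0;
  }
}
```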

Key decisions

In-process pub/sub, not Redis

Redis pub/sub is the textbook answer. At our scale — one API instance, dozens of vehicles, a few hundred riders — it's overkill. The in-process EventEmitter is one less moving part. When we shard to multiple API instances, we'll swap the bus implementation; the rest of the code doesn't change.
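
The swap point is easiest to keep honest behind an interface. This is a sketch, not the actual `bus.ts`: the in-process class below matches the current design in spirit, and a Redis-backed class would implement the same two methods:

```typescript
import { EventEmitter } from "node:events";

interface Bus {
  publish(channel: string, payload: unknown): void;
  subscribe(channel: string, handler: (payload: unknown) => void): () => void;
}

// Current implementation: one EventEmitter in one process.
class InProcessBus implements Bus {
  private emitter = new EventEmitter();

  publish(channel: string, payload: unknown): void {
    this.emitter.emit(channel, payload);
  }

  // Returns an unsubscribe function so SSE handlers can clean up
  // when the connection closes.
  subscribe(channel: string, handler: (payload: unknown) => void): () => void {
    this.emitter.on(channel, handler);
    return () => this.emitter.off(channel, handler);
  }
}
```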

SSE over WebSockets

WebSockets are bidirectional. Vehicle positions are one-way. SSE rides on HTTP, supports automatic reconnect natively, requires zero protocol-framing knowledge on the client, and works through every CDN we'd care to put in front.

Ring buffer for replay, not a database

Persisting every position to Postgres would let us replay forever but burns IO for data we don't read. The ring buffer keeps the last hundred positions per assignment — enough for a client that just connected to render the trail without a query.
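
A sketch of the per-assignment ring buffer. The text above only fixes "last hundred positions per assignment"; the map-of-arrays layout and field names here are assumed:

```typescript
const CAPACITY = 100; // last hundred positions per assignment

interface Position { lat: number; lng: number; recordedAt: string }

const trails = new Map<string, Position[]>();

function record(assignmentId: string, p: Position): void {
  const trail = trails.get(assignmentId) ?? [];
  trail.push(p);
  if (trail.length > CAPACITY) trail.shift(); // overwrite the oldest
  trails.set(assignmentId, trail);
}

// What a freshly connected client gets instead of a Postgres query.
function trailFor(assignmentId: string): Position[] {
  return trails.get(assignmentId) ?? [];
}
```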
