Backpressure and Data Flow Control

FlowMaker routing is designed to self-regulate throughput across the whole graph.

Diagram: backpressure control loop

The loop only advances when the downstream side acknowledges readiness, so sink latency directly regulates upstream emission speed.

Core mechanism

  1. awaitOutputAck() blocks until the source has data ready to emit.
  2. The routing queue processes items asynchronously with concurrency=1.
  3. sendInput() blocks until the destination acknowledges the item.

This naturally propagates downstream slowness upstream and prevents unbounded memory growth.

Why this matters in production

Industrial systems usually have uneven node speeds:

  • A PLC source can emit quickly.
  • A transformation pipe can process moderately.
  • A sink writing to an external historian/API can become temporarily slow.

Without backpressure, queues grow without bound and memory usage climbs steadily. In FlowMaker, a slower sink slows the full chain, keeping the system stable rather than over-buffered.
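To make the growth concrete, here is a minimal sketch (function name and rates are hypothetical, not part of FlowMaker) of how an unpaced queue accumulates items whenever the source outpaces the sink:

```typescript
// Contrast sketch: without ack pacing, queue depth grows linearly with the
// rate difference between producer and consumer.
function queueDepthAfter(
  seconds: number,
  sourceRatePerSec: number,
  sinkRatePerSec: number,
): number {
  // Depth grows by the rate difference for as long as the imbalance lasts.
  return Math.max(0, (sourceRatePerSec - sinkRatePerSec) * seconds);
}

// e.g. a 1000 msg/s source feeding a 200 msg/s sink accumulates
// 800 items per second of imbalance:
const depth = queueDepthAfter(10, 1000, 200);
```

Ten seconds of that imbalance already buffers 8,000 items; ack pacing removes the imbalance at the source instead.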

Runtime behavior (simplified)

while (running) {
  const reply = await source.awaitOutputAck(outputPort); // blocks until the source has data
  if (reply.error || cancelled) break;

  queue.push({ payload: reply.data, header: reply.header });
}

// inside the queue worker (concurrency=1), for each dequeued item:
await destination.sendInput(inputPort, item.payload, item.header); // blocks until the sink acks

These two blocking operations are the control loop: neither side can advance faster than the other acknowledges.
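The loop above can be exercised end to end with a small self-contained simulation (all names here are invented for illustration, not the real FlowMaker runtime): a source that emits only after downstream readiness, a concurrency=1 worker, and a sink whose ack waits for its write to finish. The queue never grows beyond one item regardless of how slow the sink is.

```typescript
// Simulation of the ack-driven control loop. Hypothetical names; the real
// runtime lives in run-node-connection.class.ts.
type Item = { payload: string };

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function runLoop(totalItems: number, sinkDelayMs: number) {
  const queue: Item[] = [];
  let maxQueueLen = 0;
  let processed = 0;

  // Sink: acks only after its (possibly slow) write completes.
  const sendInput = async (_item: Item) => {
    await sleep(sinkDelayMs);
    processed++;
  };

  // Queue worker with concurrency=1: one in-flight item at a time,
  // expressed as a serial promise chain.
  let draining = Promise.resolve();
  const push = (item: Item) => {
    queue.push(item);
    maxQueueLen = Math.max(maxQueueLen, queue.length);
    draining = draining.then(async () => {
      const next = queue.shift();
      if (next) await sendInput(next);
    });
  };

  // Source loop: emits the next item only once the previous one has been
  // fully acknowledged (stands in for awaitOutputAck()).
  for (let i = 0; i < totalItems; i++) {
    push({ payload: `item-${i}` });
    await draining;
  }
  await draining;
  return { processed, maxQueueLen };
}
```

Raising `sinkDelayMs` lowers throughput but leaves `maxQueueLen` unchanged, which is exactly the trade the control loop makes.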

Interface-level understanding

interface FlowBoxSource {
  onOutputReady(
    outputId: Buffer,
    header: Buffer,
    fnReadyForNextItem: (data: Buffer, header: Buffer) => any,
  ): any;
}

interface FlowBoxSink {
  onInputReceived(
    inputId: Buffer,
    header: Buffer,
    data: Buffer,
    fnReadyForNextItem: () => any,
  ): any;
}

fnReadyForNextItem is the pacing signal: a node should call it only when it is genuinely ready to continue.

[NOTE!pace/ACK DISCIPLINE] Calling fnReadyForNextItem before persistence, retries, or side effects are actually complete defeats backpressure and can mask data loss during incident recovery.
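A sketch of correct ack discipline, using the FlowBoxSink interface from above (the HistorianSink class and its injected write function are illustrative assumptions):

```typescript
// The interface as shown in this document.
interface FlowBoxSink {
  onInputReceived(
    inputId: Buffer,
    header: Buffer,
    data: Buffer,
    fnReadyForNextItem: () => any,
  ): any;
}

// Hypothetical sink: fnReadyForNextItem fires only after the write has fully
// completed, so upstream pacing reflects the sink's real cost.
class HistorianSink implements FlowBoxSink {
  constructor(private write: (data: Buffer) => Promise<void>) {}

  async onInputReceived(
    _inputId: Buffer,
    _header: Buffer,
    data: Buffer,
    fnReadyForNextItem: () => any,
  ) {
    // Wrong: calling fnReadyForNextItem() here would ack before the write
    // lands, defeating backpressure and hiding failures.
    await this.write(data); // persistence, retries, side effects...
    fnReadyForNextItem();   // ...then, and only then, signal readiness
  }
}
```

Moving the ack above the `await` is the exact failure mode the note describes: throughput looks great until the unwritten data is needed.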

Real-life scenarios

Scenario A: Historian latency spike

A sink writing to a remote historian starts taking 1s per write (instead of 50ms). Backpressure ensures upstream pipes and sources slow down accordingly, avoiding runaway in-memory buffers.

Scenario B: Burst from event source

A source emits a short burst after reconnecting. Queue and ack pacing smooth the burst so it is delivered at the destination's processing speed rather than flooding the sink.

Scenario C: External API rate-limited sink

If an HTTP sink is throttled by 429 responses and retries, its slower ack cadence reduces upstream pressure automatically.
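Scenario C can be sketched as a retry loop whose ack is deferred until the post succeeds (function names, status handling, and delays are assumptions for illustration):

```typescript
// Rate-limited sink sketch: on a 429 the sink backs off and retries, and it
// acks only after a successful post. The slower ack cadence is all the
// upstream nodes ever see.
async function sendWithRetry(
  post: () => Promise<number>, // returns an HTTP status code
  ack: () => void,             // stands in for fnReadyForNextItem
  retryDelayMs = 100,
  maxAttempts = 5,
): Promise<number> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await post();
    if (status !== 429) {
      ack(); // success (or a non-retryable status): release upstream
      return status;
    }
    // Throttled: wait before retrying, which stretches the ack cadence.
    await new Promise((r) => setTimeout(r, retryDelayMs));
  }
  throw new Error("gave up after repeated 429s");
}
```

No explicit rate-limit plumbing is needed upstream: the deferred ack is the rate limiter.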

Operational implications

  • Throughput is bounded by the slowest active branch.
  • Memory profile stays predictable under prolonged slowness.
  • Recovery is cleaner after transient sink degradation.
  • Alerting should focus on sustained ack latency and queue wait times.
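For the alerting point above, a minimal sketch of ack-latency tracking (the tracker and its percentile method are hypothetical, not a FlowMaker API): wrap the blocking send, record its duration, and alert on a sustained high percentile rather than on raw throughput.

```typescript
// Rolling-window ack-latency tracker for alerting on sustained slowness.
function makeLatencyTracker(windowSize = 100) {
  const samples: number[] = [];
  return {
    record(ms: number) {
      samples.push(ms);
      if (samples.length > windowSize) samples.shift(); // keep a bounded window
    },
    p95(): number {
      const sorted = [...samples].sort((a, b) => a - b);
      const idx = Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95));
      return sorted[idx] ?? 0; // 0 when no samples recorded yet
    },
  };
}
```

A sustained p95 well above the sink's normal write time is the signal worth paging on; a momentary throughput dip alone is not.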

[WARNING!monitoring/PERFORMANCE EXPECTATION] A throughput drop is often a correctness feature, not a bug: it indicates the control loop is protecting memory while downstream dependencies recover.

Runtime references

  • platform-backend/docs/runtime-v2.md section Data Routing and Backpressure
  • platform-backend/src/runner/run-node-connection.class.ts
  • sdk/typescript/src/flow-element.types.ts