LogBus Pipelines vs Dynamic Workflows


Two ways to run code you didn't write ahead of time

TL;DR

They look different, but they’re converging on the same idea:

A runtime for code that is generated on demand (often by humans, increasingly by agents)


The setup

Every few years, we reinvent how code runs:

And that last one is where things get interesting.

Two systems sit at opposite ends of this spectrum: Cloudflare Dynamic Workflows and LogBus pipelines.

At first glance, they don’t overlap much. One is orchestration. The other is observability.

But squint a bit, and they start to rhyme.


What Dynamic Workflows actually are

Cloudflare Dynamic Workflows are essentially:

“Run this program, reliably, even if it takes days.”

A workflow looks like:

Under the hood:

Think:

This is not just “run a function.” This is “run a story.”
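The durable-step idea behind "run a story" can be sketched in a few lines. The names here (`step`, `store`) are illustrative, not Cloudflare's actual API; the point is that each step's result is recorded before the run advances, so a crashed run replays completed steps from storage instead of re-executing them.

```python
# Durable-step sketch. A real engine persists `store` to durable storage;
# a dict stands in here. `effects` counts real executions for demonstration.

def step(name, store, fn):
    """Execute `fn` at most once per run; replay its saved result otherwise."""
    if name in store:
        return store[name]            # step already completed: replay
    result = fn()
    store[name] = result              # a real engine persists this before advancing
    return result

def run_workflow(store, effects):
    order = step("create_order", store, lambda: effects.append("create") or {"id": 42})
    payment = step("charge_card", store, lambda: effects.append("charge") or {"paid": True})
    return order, payment

store, effects = {}, []
run_workflow(store, effects)          # first attempt: both steps execute
run_workflow(store, effects)          # replay: both steps are skipped
print(effects)                        # ['create', 'charge']
```

The same mechanism is what lets a run "take days": between steps, nothing needs to stay in memory.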


What LogBus pipelines are

LogBus pipelines are closer to:

“Continuously interpret a firehose of reality.”

They:

Key properties:

Think:

If workflows are a story, pipelines are a stream of consciousness.
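A minimal sketch of that stream-of-consciousness shape, with an invented log format and field names — stages attach to an unbounded iterator and run forever:

```python
# Pipeline sketch: filter -> parse -> emit over a (conceptually infinite)
# stream of log lines. The " ERROR " format is made up for illustration.

def pipeline(events):
    for line in events:
        if "ERROR" not in line:       # filter: drop uninteresting events
            continue
        service, _, message = line.partition(" ERROR ")
        yield {"service": service, "message": message}   # enrich and emit

stream = iter([
    "auth ERROR token expired",
    "auth INFO login ok",
    "billing ERROR card declined",
])
print(list(pipeline(stream)))         # the two ERROR events, parsed
```

Note there is no "done" state: the generator yields as long as the source produces.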


Where they unexpectedly overlap

1. User-defined code on shared infrastructure

Both systems exist because of the same uncomfortable truth:

You don’t control the code you need to run.

Same problem space:

Different clothes, same laundry.


2. Primitives are the product

Both systems win or lose based on what they expose.

Workflows primitives:

Pipeline primitives:

In both cases:

The platform is just a thin layer around a really good set of verbs.
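As a rough sketch of what each side's verb set looks like — method names below are paraphrases for illustration, not either product's actual API — the striking thing is how small each surface is:

```python
from typing import Any, Callable, Protocol

class WorkflowSteps(Protocol):
    """Paraphrased workflow verbs: a handful of ways to advance a run."""
    def do(self, name: str, fn: Callable[[], Any]) -> Any: ...          # run a step durably
    def sleep(self, name: str, seconds: float) -> None: ...             # suspend cheaply
    def wait_for_event(self, name: str) -> Any: ...                     # park until input arrives

class PipelineStages(Protocol):
    """Paraphrased pipeline verbs: a handful of ways to shape a stream."""
    def filter(self, pred: Callable[[dict], bool]) -> "PipelineStages": ...  # drop events
    def map(self, fn: Callable[[dict], dict]) -> "PipelineStages": ...       # reshape events
    def window(self, seconds: float) -> "PipelineStages": ...                # bucket by time
```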


3. Control plane vs execution plane

Both introduce indirection:

This split is what enables:
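One way to picture the indirection, with all names invented: the control plane only stores and versions definitions, while the execution plane fetches one and runs it, so either side can scale or upgrade independently. (`eval` below is a toy stand-in for a real sandbox.)

```python
# Control-plane/execution-plane split, sketched.

class ControlPlane:
    """Stores versioned definitions; never executes anything itself."""
    def __init__(self):
        self.definitions = {}
    def register(self, name, version, source):
        self.definitions[name] = {"version": version, "source": source}
    def resolve(self, name):
        return self.definitions[name]

class Executor:
    """Pulls a definition through the control plane, then runs it."""
    def __init__(self, control):
        self.control = control
    def run(self, name, arg):
        defn = self.control.resolve(name)   # the indirection: fetch, then run
        return eval(defn["source"])(arg)    # toy stand-in for sandboxed execution

cp = ControlPlane()
cp.register("double", 1, "lambda x: x * 2")
print(Executor(cp).run("double", 21))       # 42
```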


4. Suspiciously good fit for agents

This is where things stop being coincidental.

Modern LLMs are much better at:

So both systems become:

Workflows:

“Here’s a plan. Execute it over time.”

LogBus:

“Here’s how to interpret signals. Apply it continuously.”
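To make "apply it continuously" concrete, here is a hedged sketch in which the agent's output is not imperative code but a small declarative spec, which the runtime compiles into a running rule. The spec format and field names are invented for illustration:

```python
# An agent-generated spec, compiled into a continuously-applied rule.

SPEC = {"match": "ERROR", "extract": "service", "threshold": 3}

def compile_spec(spec):
    """Turn a declarative spec into a stateful callable applied per event."""
    counts = {}
    def apply(event):
        if spec["match"] in event["message"]:
            key = event[spec["extract"]]
            counts[key] = counts.get(key, 0) + 1
            if counts[key] >= spec["threshold"]:
                return f"alert: {key}"      # signal crossed the agent's threshold
        return None
    return apply

rule = compile_spec(SPEC)
events = [{"service": "auth", "message": "ERROR timeout"}] * 3
print([rule(e) for e in events])            # [None, None, 'alert: auth']
```

Specs-as-data are easier to validate, diff, and sandbox than arbitrary agent-written code, which is part of why both systems gravitate this way.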


Where they fundamentally differ

1. Finite vs infinite

            Workflows           Pipelines
Lifecycle   Start → End         Never ends
Shape       Sequence of steps   Continuous flow
Trigger     Event               Always on

This is the big one.

Workflows finish. Pipelines exist.


2. Time: control vs data

Workflows treat time like a control mechanism:

Pipelines treat time like data:

One moves through time. The other measures time.
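The "time as data" half can be sketched as event-time windowing: the timestamp is read off each event and used to bucket it, rather than being consumed by the run itself as a sleep would be. Names and the one-minute bucket size are illustrative:

```python
# Event-time windowing: time is a field on the data, not a pause in the code.

def window_by_minute(events):
    """Group (timestamp_seconds, value) pairs by the minute in their own timestamps."""
    buckets = {}
    for ts, value in events:
        buckets.setdefault(ts // 60, []).append(value)
    return buckets

events = [(0, "a"), (59, "b"), (61, "c")]
print(window_by_minute(events))   # {0: ['a', 'b'], 1: ['c']}
```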


3. State philosophy

Workflows:

Pipelines:


4. Reliability model

Workflows:

Pipelines:
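One reliability primitive workflow engines lean on is per-step retry; a pipeline's rough analogue is at-least-once redelivery from the log itself rather than retrying a step in place. A hedged sketch of the former (backoff elided, attempt count illustrative):

```python
# Per-step retry, the workflow-side reliability primitive.

def retry(fn, attempts=3):
    """Call `fn` until it succeeds or the attempt budget is exhausted."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            # a real engine would sleep with exponential backoff here

tries = []
def flaky():
    tries.append(1)
    if len(tries) < 3:
        raise RuntimeError("transient")
    return "ok"

print(retry(flaky))   # 'ok', after two transient failures
```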


5. What they’re for

Workflows:

Pipelines:


The deeper connection

Here’s the part that’s worth remembering:

Workflows are programmable control flow over time. Pipelines are programmable data flow over signals.

Or, if you prefer a table:

Dimension   Workflows           Pipelines
Axis        Time                Data
Unit        Workflow instance   Event
Model       Imperative          Dataflow
Lifetime    Finite              Infinite
Goal        Do work             Extract signal

The convergence (this is the fun part)

They’re moving toward each other.

Workflows → becoming dataflow-ish

Pipelines → becoming workflow-ish

Both are quietly evolving into:

execution substrates for agent-generated programs


A mildly spicy take

One answers:

“What should happen next?”

The other answers:

“What just happened, and does it matter?”


Why this matters (especially for you)

If you’re building something like LogBus:

You’re not “just” building a log pipeline.

You’re building:

A runtime where agents can continuously interpret the world.

And that’s a very different category than:

It’s closer to:


Closing thought

Dynamic Workflows and LogBus pipelines start from opposite ends:

But they’re converging on the same destination:

A system where code is no longer written ahead of time but generated on demand, then safely executed at scale.

The rest is just implementation detail.


And yes, the implementation details are doing most of the work.