LogBus Pipelines vs Dynamic Workflows
Two ways to run code you didn't write ahead of time
TL;DR
- Cloudflare Dynamic Workflows = durable, step-based execution over time
- LogBus pipelines = continuous transformation of streaming data
They look different, but they’re converging on the same idea:
A runtime for code that is generated on demand (often by humans, increasingly by agents)
The setup
Every few years, we reinvent how code runs:
- First: servers
- Then: containers
- Then: serverless
- Now: agent-executed code
And that last one is where things get interesting.
Two systems that sit on opposite ends of this spectrum:
- Cloudflare Dynamic Workflows
- LogBus pipelines (your friendly neighborhood signal processor)
At first glance, they don’t overlap much. One is orchestration. The other is observability.
But squint a bit, and they start to rhyme.
What Dynamic Workflows actually are
Cloudflare Dynamic Workflows are essentially:
“Run this program, reliably, even if it takes days.”
A workflow looks like:
- A series of steps
- Each step:
  - runs code
  - can retry
  - can sleep
  - can wait for external events
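The step loop above can be sketched in a few lines. This is a standalone, in-memory illustration: the `Step` class, the retry count, and the `provisionUser` workflow are made up to mirror the shape of the real API (`step.do`, `step.sleep`), not taken from Cloudflare's SDK, which persists state between steps rather than holding it in memory.

```typescript
// A minimal in-memory sketch of a step-based workflow runner.
// Names mirror the shape of Cloudflare's API; this is not the real SDK.

type StepFn<T> = () => Promise<T>;

class Step {
  // Run a named step, retrying up to `retries` extra times on failure.
  async do<T>(name: string, fn: StepFn<T>, retries = 3): Promise<T> {
    let lastError: unknown;
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        return await fn();
      } catch (err) {
        lastError = err;
      }
    }
    throw lastError;
  }

  // A durable engine would persist across this gap; here we just await a timer.
  async sleep(name: string, ms: number): Promise<void> {
    await new Promise((resolve) => setTimeout(resolve, ms));
  }
}

// A workflow is just an async function over (event, step).
async function provisionUser(event: { userId: string }, step: Step) {
  const account = await step.do("create-account", async () => {
    return { id: event.userId, status: "created" };
  });
  await step.sleep("cool-down", 10); // stand-in for "sleep 24 hours"
  return await step.do("notify", async () => `${account.id}:notified`);
}
```

The point of the shape: each `step.do` is a named, retryable unit, so the engine can checkpoint after it and resume mid-story.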
Under the hood:
- Execution is persisted
- State is managed for you
- Code can be supplied at runtime (multi-tenant style)
Think:
- approvals
- provisioning flows
- long-running agent tasks
This is not just “run a function.” This is “run a story.”
What LogBus pipelines are
LogBus pipelines are closer to:
“Continuously interpret a firehose of reality.”
They:
- ingest logs/events (append-only)
- transform and enrich
- derive signals (metrics, anomalies, alerts)
- roll up into storage (Parquet, DuckDB, etc.)
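A toy version of that ingest → enrich → derive loop. The event shape, the severity scoring, and the 0.5 alert threshold are all illustrative assumptions, not LogBus's actual API:

```typescript
// Sketch of one pipeline pass: enrich raw events, then derive a signal.

interface LogEvent {
  service: string;
  level: "info" | "error";
}

// transform/enrich: tag each event with a severity score (illustrative)
function enrich(e: LogEvent): LogEvent & { score: number } {
  return { ...e, score: e.level === "error" ? 1 : 0 };
}

// derive: turn a batch of enriched events into a signal
function deriveSignal(events: LogEvent[]): { errorRate: number; alert: boolean } {
  const enriched = events.map(enrich);
  const errorRate =
    enriched.reduce((sum, e) => sum + e.score, 0) / Math.max(enriched.length, 1);
  return { errorRate, alert: errorRate > 0.5 }; // threshold is an assumption
}
```

In a real pipeline this runs forever over an unbounded stream; here it runs once over a batch, which is the only liberty the sketch takes.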
Key properties:
- always-on
- unbounded
- data-driven
Think:
- “what just happened?”
- “what pattern is emerging?”
- “should I care?”
If workflows are a story, pipelines are a stream of consciousness.
Where they unexpectedly overlap
1. User-defined code on shared infrastructure
Both systems exist because of the same uncomfortable truth:
You don’t control the code you need to run.
- Workflows: users provide `run(event, step)`
- LogBus: users define transforms, filters, plugins
Same problem space:
- sandboxing
- isolation
- cost control
- observability
Different clothes, same laundry.
2. Primitives are the product
Both systems win or lose based on what they expose.
Workflow primitives:
- do work
- sleep
- wait for event
Pipeline primitives:
- filter
- transform
- aggregate
- derive
In both cases:
The platform is just a thin layer around a really good set of verbs.
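One way to picture "a thin layer around verbs": each primitive as a small function over batches of events, with a pipeline as plain composition. All the names here are illustrative, not any system's real API.

```typescript
// The verbs as composable stages; a pipeline is function composition.

type Stage<A, B> = (events: A[]) => B[];

const filter = <T>(pred: (e: T) => boolean): Stage<T, T> =>
  (events) => events.filter(pred);

const transform = <A, B>(fn: (e: A) => B): Stage<A, B> =>
  (events) => events.map(fn);

const aggregate = <T, R>(fn: (events: T[]) => R): Stage<T, R> =>
  (events) => [fn(events)];

// compose two stages left-to-right
const pipe = <A, B, C>(f: Stage<A, B>, g: Stage<B, C>): Stage<A, C> =>
  (events) => g(f(events));

// Example pipeline built purely from the verbs:
const countErrors = pipe(
  filter((e: { level: string }) => e.level === "error"),
  aggregate((es) => es.length),
);
```

Everything else, such as scheduling, isolation, and persistence, is the platform's job; the user only ever touches the verbs.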
3. Control plane vs execution plane
Both introduce indirection:
- Workflows: dispatcher → tenant code → execution engine
- LogBus: ingest → pipeline graph → plugin execution
This split is what enables:
- multi-tenancy
- runtime code injection
- scaling without chaos
4. Suspiciously good fit for agents
This is where things stop being coincidental.
Modern LLMs are much better at writing code than at orchestrating tools.
So both systems become:
- a target runtime for generated code
Workflows:
“Here’s a plan. Execute it over time.”
LogBus:
“Here’s how to interpret signals. Apply it continuously.”
Where they fundamentally differ
1. Finite vs infinite
| | Workflows | Pipelines |
|---|---|---|
| Lifecycle | Start → End | Never ends |
| Shape | Sequence of steps | Continuous flow |
| Trigger | Event | Always on |
This is the big one.
Workflows finish. Pipelines exist.
2. Time: control vs data
Workflows treat time like a control mechanism:
- sleep 24 hours
- wait for event
Pipelines treat time like data:
- windowing
- aggregation
- ordering
One moves through time. The other measures time.
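A minimal sketch of time-as-data: a tumbling window that buckets events by flooring their timestamps, then aggregates per bucket. The field names and window size are assumptions for illustration.

```typescript
// Assign each event to a fixed-size (tumbling) window and sum per window.

interface Timed {
  ts: number;    // epoch milliseconds
  value: number;
}

function tumblingSum(events: Timed[], windowMs: number): Map<number, number> {
  const windows = new Map<number, number>();
  for (const e of events) {
    // Window start = timestamp floored to the window boundary.
    const start = Math.floor(e.ts / windowMs) * windowMs;
    windows.set(start, (windows.get(start) ?? 0) + e.value);
  }
  return windows;
}
```

Note that the timestamp is an input to the computation, not a thing the program waits on; that is the whole difference from `sleep 24 hours`.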
3. State philosophy
Workflows:
- state is implicit
- managed by the engine
- scoped to a workflow instance
Pipelines:
- state is explicit
- often external (storage layers)
- tied to aggregates and windows
4. Reliability model
Workflows:
- step-level retries
- strong durability
- idempotent execution
Pipelines:
- at-least-once processing
- replay via logs
- eventual consistency
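At-least-once plus replay means the same event can arrive more than once, so consumers have to be safe under duplicates. A sketch of one common answer, deduplicating by a hypothetical event id before applying effects:

```typescript
// An idempotent consumer: applying the same event twice equals applying it once.

interface DeliveredEvent {
  id: string;     // hypothetical unique event id
  amount: number;
}

class IdempotentCounter {
  private seen = new Set<string>();
  total = 0;

  apply(e: DeliveredEvent): void {
    if (this.seen.has(e.id)) return; // duplicate from redelivery or replay
    this.seen.add(e.id);
    this.total += e.amount;
  }
}
```

The in-memory `Set` is the illustrative part; real pipelines push this state into storage, which is exactly the "state is explicit, often external" philosophy above.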
5. What they’re for
Workflows:
- orchestration
- human-in-the-loop processes
- long-running automation
Pipelines:
- observability
- analytics
- security detection
- signal extraction
The deeper connection
Here’s the part that’s worth remembering:
Workflows are programmable control flow over time. Pipelines are programmable data flow over signals.
Or, if you prefer a table:
| Dimension | Workflows | Pipelines |
|---|---|---|
| Axis | Time | Data |
| Unit | Workflow instance | Event |
| Model | Imperative | Dataflow |
| Lifetime | Finite | Infinite |
| Goal | Do work | Extract signal |
The convergence (this is the fun part)
They’re moving toward each other.
Workflows → becoming dataflow-ish
- parallel steps
- DAG-like execution
- event-driven branching
Pipelines → becoming workflow-ish
- conditional logic
- stateful decisions
- agent-generated transformations
Both are quietly evolving into:
execution substrates for agent-generated programs
A mildly spicy take
- Dynamic Workflows = serverless orchestration for agents
- LogBus = programmable signal processing for reality
One answers:
“What should happen next?”
The other answers:
“What just happened, and does it matter?”
Why this matters (especially for you)
If you’re building something like LogBus:
You’re not “just” building a log pipeline.
You’re building:
A runtime where agents can continuously interpret the world.
And that’s a very different category than:
- logging tools
- metrics systems
- even traditional observability stacks
It’s closer to:
- real-time cognition systems
- programmable perception layers
Closing thought
Dynamic Workflows and LogBus pipelines start from opposite ends:
- one begins with control flow
- the other with data flow
But they’re converging on the same destination:
A system where code is no longer written ahead of time, but generated on demand, and safely executed at scale.
The rest is just implementation detail.
And yes, the implementation details are doing most of the work.