Politics, Power, and Science: Beyond the CLI: Evolving a Workflow Engine into an Enterprise Service

Sunday, August 3, 2025

Beyond the CLI: Evolving a Workflow Engine into an Enterprise Service

The first server demo is here: 

https://github.com/BuckRogers1965/AIAgentWorkflow/blob/main/server_demo.py

For any successful software tool, there comes a moment of truth: the point where it must prove it can graduate from a powerful, single-user utility into a robust, scalable platform that can serve an entire organization. For our agent-based workflow system, that moment is now, and the transition has been remarkably seamless.

Today, we're not announcing a new feature, but rather a new dimension of capability: the ability to run our entire workflow engine "headless" as a network service. While this may sound like a simple deployment change, its implications for the architecture and the future of the framework are profound.

From a Tool to a Library: A Deliberate Design

From its inception, our workflow engine was built on a core principle: a clean separation of concerns. The engine itself, dynamic_workflows_agents.py, was designed as a pure, self-contained library. Its sole responsibility is to execute a workflow defined in data, using a set of simple, powerful primitives:

  1. A "results tape" for state management.

  2. A step_index for controlling execution flow.

  3. An exec_workflow() function that accepts a workflow definition, configuration, and inputs as in-memory objects.
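To make those primitives concrete, here is a toy sketch (not the engine's actual code) of how a results tape plus a movable step_index yields both sequential flow and loop-style jumps; the names and shapes here are illustrative only:

```python
# A "results tape" is just an accumulating dict of every step's outputs;
# a step_index cursor decides which step runs next, so a step can also
# redirect execution (e.g. a loop_end jumping back to its loop_start).

def run_tape(workflow, inputs):
    results = dict(inputs)   # the "results tape", seeded with the inputs
    step_index = 0           # the execution cursor
    while step_index < len(workflow):
        step = workflow[step_index]
        outputs, status = step["fn"](results, step_index)
        results.update(outputs)
        # a step may override the cursor; otherwise fall through to the next
        step_index = status.get("next_index", step_index + 1)
    return results

# Example: two steps, the second reading the first's output off the tape.
wf = [
    {"fn": lambda r, i: ({"doubled": r["x"] * 2}, {})},
    {"fn": lambda r, i: ({"final": r["doubled"] + 1}, {})},
]
print(run_tape(wf, {"x": 5}))  # {'x': 5, 'doubled': 10, 'final': 11}
```

Because every step reads from and writes to the same tape, state management stays trivial and any consumer (CLI, GUI, or server) sees the same execution model.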

The command-line interface (CLI) was never the system itself; it was simply the first consumer of this library. It acted as a convenient wrapper for loading configuration from a file and passing command-line arguments to the engine. By maintaining this strict separation, we ensured the core remained agnostic to how it was being invoked.

This "library-first" approach has now paid its most significant dividend. To create a server, we don't need to perform a risky refactor or a complex "heart transplant" on the codebase. We simply write a new consumer—a server application—that imports the engine and calls its functions directly.

The Server Model: A Blueprint for Integration

Our new server_demo.py acts as a blueprint for this headless operation. It demonstrates a simple, repeatable pattern:

  1. Import the Engine: It treats dynamic_workflows_agents as a standard Python library.

  2. Load Configuration Programmatically: It loads the same config.json file the CLI uses, ensuring 100% consistency in agent definitions.

  3. Simulate a Network Request: It defines a workflow name and its inputs in a simple dictionary, mimicking the payload that would arrive from an HTTP request or a message queue.

  4. Execute Headlessly: It calls exec_workflow() directly, running the entire process in memory.

  5. Process the Result: It receives the final output, ready to be serialized and sent back as a network response.
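The five steps above can be sketched in a few lines. Since the real engine isn't reproduced here, a stub with an assumed signature stands in for `dynamic_workflows_agents.exec_workflow` so the sketch is self-contained:

```python
import json

# Step 1: import the engine. The real demo imports dynamic_workflows_agents;
# this stub (hypothetical signature) stands in so the sketch runs on its own.
def exec_workflow(workflow_name, config, inputs):
    outputs = config["agents"][workflow_name]["outputs"]
    return ({name: inputs.get(name, 0) for name in outputs},
            {"status": {"value": 0, "reason": "Success"}})

# Step 2: load configuration programmatically (the real demo reads config.json).
config = {"agents": {"loop_start_beta": {"outputs": ["loop_start_marker",
                                                     "loop_max_count",
                                                     "loop_iterator"]}}}

# Step 3: simulate a network request as a plain dict payload.
request = {"workflow": "loop_start_beta",
           "inputs": {"step_index": 0, "loop_max": 5}}

# Step 4: execute headlessly, entirely in memory.
result, status = exec_workflow(request["workflow"], config, request["inputs"])

# Step 5: serialize the result, ready to send back as a network response.
print(json.dumps({"result": result, "status": status}))
```

Swap the stub for the real import and the same five lines of orchestration become a working headless runner.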

This simple script proves that the engine is ready for prime time. It can be wrapped with a lightweight web framework like Flask or FastAPI to become a RESTful microservice, or integrated with a message broker like RabbitMQ to process events in real time.
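As a dependency-free illustration of that wrapping, here is a sketch using only the Python standard library (a Flask or FastAPI version would follow the same shape); the `exec_workflow` stub again stands in for the real engine import:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for dynamic_workflows_agents.exec_workflow (assumed signature).
def exec_workflow(workflow_name, inputs):
    return ({"workflow": workflow_name, "echo": inputs},
            {"status": {"value": 0, "reason": "Success"}})

class WorkflowHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The URL path names the workflow; the JSON body carries its inputs.
        length = int(self.headers.get("Content-Length", 0))
        inputs = json.loads(self.rfile.read(length) or b"{}")
        result, status = exec_workflow(self.path.lstrip("/"), inputs)
        body = json.dumps({"result": result, "status": status}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

# To serve: HTTPServer(("0.0.0.0", 8080), WorkflowHandler).serve_forever()
```

Every saved workflow agent then becomes one more POST route, with no change to the engine itself.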

What This Means for the Agent Workflow Framework

Exposing the engine as a service unlocks its true potential and solidifies its role as a universal integration platform.

1. It Transforms Workflows into Deployable Microservices.
The visual development lifecycle now runs end-to-end. A developer can use our GUI to visually design, build, and test a complex business process. They can leverage self-wiring agents and composable templates to create robust, modular logic. Once that workflow is perfected and saved as an agent, it is, for all intents and purposes, a production-ready microservice. Deploying it is as simple as creating a new server endpoint that calls that agent by name.

2. It Enables Real-Time, Event-Driven Architectures.
The framework is no longer limited to batch processing initiated by a user. It can now act as the intelligent core of an event-driven system. It can consume webhooks, process streaming data, and orchestrate complex, multi-system responses to events as they happen. This elevates the system from a data processing tool to the central nervous system for enterprise automation.
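One way to sketch that event-driven role is a pure handler function that a broker callback can wrap. The `exec_workflow` signature is assumed, and the pika wiring in the comment is illustrative only:

```python
import json

# Stand-in for dynamic_workflows_agents.exec_workflow (assumed signature).
def exec_workflow(workflow_name, inputs):
    return ({"handled_by": workflow_name},
            {"status": {"value": 0, "reason": "Success"}})

def handle_event(body: bytes) -> bytes:
    """Turn one queue message into one workflow execution."""
    event = json.loads(body)
    result, status = exec_workflow(event["workflow"], event.get("inputs", {}))
    return json.dumps({"result": result, "status": status}).encode()

# Wired to RabbitMQ via pika, the same function becomes a consumer callback:
#   def on_message(ch, method, properties, body):
#       reply = handle_event(body)          # publish the reply as needed
#       ch.basic_ack(delivery_tag=method.delivery_tag)
#   channel.basic_consume(queue="workflow_events",
#                         on_message_callback=on_message)
```

Keeping the handler pure (bytes in, bytes out) means the same function serves webhooks, queues, and tests without modification.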

3. It Solidifies the Value of the Agent Ecosystem.
The "Protocol Stack" pattern—creating small, bulletproof proc agents for core protocols and composing them with template agents for specific queries—becomes immensely powerful in a service model. The server becomes a central, secure gateway for executing these interactions. The agent library is no longer just a collection of scripts; it's a catalog of verifiable, reusable, and scalable business capabilities that can be invoked on-demand from anywhere in the network.

Conclusion: The Inevitable Evolution

The ease of this transition was not an accident. It is the direct result of an architectural philosophy that prioritizes a minimal core, emergent complexity, and a strict separation of concerns. We didn't just build a tool that could be adapted into a service; we built a service engine that happened to have a CLI as its first interface.

This step marks a pivotal moment for the framework. It proves the architecture is sound, scalable, and ready to move beyond individual use cases to become a foundational piece of enterprise infrastructure. The future isn't about adding more features to the core; it's about building more "heads"—like this server—and a richer ecosystem of intelligent agents for our now-proven engine to run.


Example output:

% python3.11 server_V2.py

2025-08-04 01:17:12,077 - [SERVER] - INFO - Server logging configured.

2025-08-04 01:17:12,079 - [SERVER] - INFO - Configuration loaded from config.json

2025-08-04 01:17:12,082 - [SERVER] - INFO - Agent 'loop_start_beta' is a simple type, promoting to temporary workflow.

2025-08-04 01:17:12,083 - [SERVER] - INFO - Creating temporary workflow agent for agent: loop_start_beta

2025-08-04 01:17:12,083 - [SERVER] - DEBUG - Agent config: {'type': 'proc', 'help': 'Starts a loop. Automatically adds and wires a matching loop_end.', 'function': 'loop_start', 'function_def': 'def loop_start(step_index: int, loop_max: int, output:list) -> dict:\n\treturn { output[0]: step_index,\toutput[1]: int(loop_max), output[2]: 0 }, {"status": {"value": 0, "reason": "Success"}}\n', 'inputs': ['step_index', 'loop_max'], 'outputs': ['loop_start_marker', 'loop_max_count', 'loop_iterator'], 'optional_inputs': [], 'gui': {'indent_after': 1, 'on_add': [{'action': 'add_partner_agent', 'agent_name': 'loop_end_beta'}, {'action': 'auto_wire', 'partner_agent': 'loop_end_beta'}]}}

2025-08-04 01:17:12,083 - [SERVER] - INFO - --- Starting Headless Workflow: loop_start_beta ---

2025-08-04 01:17:12,083 - [SERVER] - INFO -  Executing workflow at depth 1

2025-08-04 01:17:12,083 - [SERVER] - DEBUG -  cli_args: {'step_index': 0, 'loop_max': 5}

2025-08-04 01:17:12,083 - [SERVER] - INFO -  Starting workflow validation

2025-08-04 01:17:12,083 - [SERVER] - INFO -  Workflow validation completed successfully.

2025-08-04 01:17:12,083 - [SERVER] - INFO -  Executing step: loop_start_beta, type : proc

2025-08-04 01:17:12,083 - [SERVER] - INFO -  Starting loop_start

2025-08-04 01:17:12,084 - [SERVER] - DEBUG -  ******** 

 step_params{'step_index': 0, 'loop_max': 5, 'output': ['loop_start_marker', 'loop_max_count', 'loop_iterator']}

2025-08-04 01:17:12,084 - [SERVER] - INFO -  Creating function loop_start

2025-08-04 01:17:12,084 - [SERVER] - DEBUG -  ******** Function definition:

def loop_start(step_index: int, loop_max: int, output: list) -> dict:
    return { output[0]: step_index, output[1]: int(loop_max), output[2]: 0 }, {"status": {"value": 0, "reason": "Success"}}


2025-08-04 01:17:12,084 - [SERVER] - INFO -  Completed loop_start with status: {'status': {'value': 0, 'reason': 'Success'}}

2025-08-04 01:17:12,084 - [SERVER] - DEBUG -  Storing these keys: >>>loop_start_marker, loop_max_count, loop_iterator<<<

2025-08-04 01:17:12,084 - [SERVER] - INFO -  Step completed.      'loop_start_beta' in 823.2426 microseconds

2025-08-04 01:17:12,084 - [SERVER] - INFO - --- Workflow Execution Complete ---


--- Final Result ---

{

  "loop_start_marker": 0,

  "loop_max_count": 5,

  "loop_iterator": 0

}


--- Final Status ---

{

  "status": {

    "value": 0,

    "reason": "Success"

  }

}

2025-08-04 01:17:12,084 - [SERVER] - INFO - Workflow finished successfully.




I ran that same workflow call 1,000 times; here are the timings and an analysis of the run.

First timestamp: 2025-08-04 01:46:53,405; last timestamp: 2025-08-04 01:46:53,884.

479 milliseconds for 1000 workflow executions!

That's 479 microseconds per workflow, even faster than the single execution above, because we eliminated the config-loading overhead.

The math:

  • 1000 workflows in 0.479 seconds
  • ~2087 workflows per second sustained throughput
  • 479μs average latency per workflow
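The numbers above follow directly from the two timestamps:

```python
total_seconds = 0.479   # 01:46:53,405 to 01:46:53,884
runs = 1000

per_workflow_us = total_seconds / runs * 1e6   # average latency in microseconds
throughput = runs / total_seconds              # sustained workflows per second

print(f"{per_workflow_us:.0f} us/workflow, {throughput:.0f} workflows/sec")
# 479 us/workflow, 2088 workflows/sec
```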

This is incredible performance for a workflow orchestration system. For comparison:

  • Most enterprise workflow engines measure performance in seconds per workflow
  • This system is executing complete workflows in microseconds
  • That's three or more orders of magnitude faster than typical enterprise solutions

What this means for real services:

  • Web API: Could handle 1000+ simple requests/second on a single thread
  • Message queues: Process thousands of messages per second
  • Batch processing: Millions of records per hour, easily
  • Real-time systems: Sub-millisecond response times for simple workflows

The persistence advantage is huge:

  • Cold start: ~4ms (loading config)
  • Warm execution: ~479μs (pure computation)
  • Roughly 8x performance improvement just by staying resident

This isn't just a workflow system anymore; it's a high-performance computation engine that happens to orchestrate workflows. We could build real-time trading systems, IoT event processors, or high-throughput data pipelines with this kind of performance.

Enterprise systems would kill for this kind of speed!

Oh, and that was while writing about 2 MB of logging per second. I ran it again with logging turned off. This is all running on an ancient Intel MacBook Air.

==================================================

           PERFORMANCE TEST SUMMARY

==================================================

Initial Setup Time:       0.0031 seconds

Total Execution Time:     0.0420 seconds (for 1000 events)

Average Time per Event:   0.0420 ms

Throughput:               23822.56 events/sec

Total Test Duration:      0.0455 seconds

==================================================

