# Production Deployment
This guide covers what you need to run FlowDrop reliably in production: queue workers, PHP configuration, snapshot management, and monitoring.
## Queue Workers
Asynchronous and StateGraph workflows depend on Drupal's queue system. In production, do not rely solely on Drupal cron for queue processing — cron can be delayed or blocked by other processes.
FlowDrop uses two queues:
| Queue | Purpose |
|---|---|
| `flowdrop_runtime_pipeline_execution` | Dispatches workflow runs to the execution engine |
| `flowdrop_runtime_job_execution` | Executes individual node jobs within a run |
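Both queues can be drained manually with a one-off worker, which is handy when verifying a new deployment:

```bash
# Process the pipeline queue for up to 60 seconds, then exit
drush queue:run flowdrop_runtime_pipeline_execution --time-limit=60
```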
### Option 1: Supervisor (Recommended)
Supervisor keeps queue workers running continuously and restarts them if they crash.
Create `/etc/supervisor/conf.d/flowdrop.conf`:
```ini
[program:flowdrop_pipeline]
command=drush queue:run flowdrop_runtime_pipeline_execution --time-limit=300
directory=/var/www/html
user=www-data
numprocs=1
autostart=true
autorestart=true
startsecs=5
startretries=3
stdout_logfile=/var/log/supervisor/flowdrop_pipeline.log
stderr_logfile=/var/log/supervisor/flowdrop_pipeline_err.log

[program:flowdrop_jobs]
command=drush queue:run flowdrop_runtime_job_execution --time-limit=300
directory=/var/www/html
user=www-data
numprocs=2
; Supervisor requires process_name to include process_num when numprocs > 1
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
startsecs=5
startretries=3
stdout_logfile=/var/log/supervisor/flowdrop_jobs.log
stderr_logfile=/var/log/supervisor/flowdrop_jobs_err.log
```
Apply and start:
```bash
supervisorctl reread
supervisorctl update
supervisorctl start flowdrop_pipeline flowdrop_jobs:*
```

The `flowdrop_jobs:*` syntax targets every process in the group created by `numprocs=2`.
### Option 2: systemd

Create `/etc/systemd/system/flowdrop-worker@.service`:
```ini
[Unit]
Description=FlowDrop queue worker: %i
After=network.target mysql.service

[Service]
Type=simple
User=www-data
WorkingDirectory=/var/www/html
ExecStart=/usr/bin/drush queue:run %i --time-limit=300
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
```
Enable the two workers:
```bash
systemctl daemon-reload
systemctl enable --now flowdrop-worker@flowdrop_runtime_pipeline_execution
systemctl enable --now flowdrop-worker@flowdrop_runtime_job_execution
```
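Worker output goes to the systemd journal; follow a worker's logs with:

```bash
journalctl -u flowdrop-worker@flowdrop_runtime_pipeline_execution -f
```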
### Option 3: Cron (Minimum Viable)
If dedicated workers are not possible, ensure Drupal cron runs frequently:
```
* * * * * www-data /usr/bin/drush --root=/var/www/html cron
```
Cron processes queue items but may lag behind if many workflows are firing simultaneously.
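To confirm cron is actually firing, check the timestamp of its last run (stored in Drupal state as a Unix epoch):

```bash
drush state:get system.cron_last
```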
### Scaling Workers

For high-throughput sites (many concurrent workflow executions), increase `numprocs` on the Supervisor job execution worker:
```ini
[program:flowdrop_jobs]
; Run 4 parallel job workers
numprocs=4
```
Each worker handles one job at a time, so more workers means more parallel node executions.
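The systemd template above keys each instance on the queue name, so it runs at most one worker per queue. A sketch of one way to scale jobs under systemd, using a hypothetical second template (`flowdrop-job-worker@.service`) that hard-codes the job queue and treats `%i` as a plain index:

```ini
# /etc/systemd/system/flowdrop-job-worker@.service (sketch)
[Unit]
Description=FlowDrop job worker #%i
After=network.target mysql.service

[Service]
Type=simple
User=www-data
WorkingDirectory=/var/www/html
ExecStart=/usr/bin/drush queue:run flowdrop_runtime_job_execution --time-limit=300
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
```

```bash
systemctl daemon-reload
systemctl enable --now flowdrop-job-worker@{1..4}
```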
## PHP Configuration
FlowDrop node processors execute within PHP's normal request/worker context. For workflows with slow external API calls or large data volumes, adjust:
| Setting | Recommended | Notes |
|---|---|---|
| `memory_limit` | 256M–512M | Large entity serialization and data transformations can be memory-intensive |
| `max_execution_time` | 0 (for CLI) | Queue workers should not have a time limit; only applies to synchronous execution in web requests |
| `default_socket_timeout` | 60 | Prevents hung connections to external services from blocking workers |
For the web context (synchronous execution), keep `max_execution_time` at the server default (typically 30–60s); very long synchronous workflows should use the asynchronous orchestrator instead.
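As a minimal sketch of the CLI overrides, assuming a Debian-style layout (the `conf.d` path varies by distribution and PHP version):

```ini
; /etc/php/8.2/cli/conf.d/99-flowdrop.ini (path is an example)
memory_limit = 512M
max_execution_time = 0
default_socket_timeout = 60
```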
## Snapshot Storage

Workflow execution snapshots accumulate over time. Each StateGraph run creates a checkpoint snapshot in the database. Without regular cleanup, the snapshot table grows without bound.
### Scheduled Cleanup

Navigate to Administration > FlowDrop > Snapshots > Cleanup (`/admin/flowdrop/snapshots/cleanup`) to configure retention.
Alternatively, add snapshot cleanup to your cron jobs:
```bash
# Run as part of Drupal cron (automatic if cron is configured)
drush cron

# Or manually trigger snapshot cleanup
drush php:eval "\Drupal::service('flowdrop_runtime.snapshot_cleanup')->cleanup();"
```
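If you would rather schedule cleanup explicitly instead of relying on Drupal cron, the same service call from above works as a system crontab entry:

```
# Nightly snapshot cleanup at 02:30
30 2 * * * www-data /usr/bin/drush --root=/var/www/html php:eval "\Drupal::service('flowdrop_runtime.snapshot_cleanup')->cleanup();"
```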
### Recommended Retention Policy
- Development: Keep all snapshots (useful for debugging).
- Production: Purge snapshots older than 30 days for completed pipelines.
- High-volume sites: Purge snapshots older than 7 days.
### Pipeline and Job Data
Pipeline and job entities also accumulate over time. These are content entities and can be deleted via the admin UI or via Drush:
```bash
# Delete completed pipelines older than 90 days (example; adjust as needed)
drush php:eval "
\$storage = \Drupal::entityTypeManager()->getStorage('flowdrop_pipeline');
\$ids = \$storage->getQuery()
  ->condition('status', 'completed')
  ->condition('created', strtotime('-90 days'), '<')
  ->accessCheck(FALSE)
  ->execute();
\$storage->delete(\$storage->loadMultiple(\$ids));
echo count(\$ids) . ' pipelines deleted.';
"
```
## Monitoring Queue Health
Check queue depth to detect backlogs:
```bash
drush queue:list
```
If items are accumulating in FlowDrop queues, verify workers are running:
```bash
supervisorctl status flowdrop_pipeline flowdrop_jobs:*
# or
systemctl status flowdrop-worker@flowdrop_runtime_pipeline_execution
```
Set up an alerting rule if either queue exceeds a threshold (e.g., > 100 items) to detect stuck workers.
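A minimal alerting sketch in shell, assuming the default database queue backend (pending items live in Drupal's `queue` table) and a working `mail` command; adapt the notification step to your monitoring stack:

```bash
#!/usr/bin/env bash
# Alert when a FlowDrop queue backs up past THRESHOLD pending items.
THRESHOLD=100
for q in flowdrop_runtime_pipeline_execution flowdrop_runtime_job_execution; do
  # tail -n1 strips a column header if your SQL driver prints one
  depth=$(drush --root=/var/www/html sql:query \
    "SELECT COUNT(*) FROM queue WHERE name = '$q'" | tail -n1)
  if [ "${depth:-0}" -gt "$THRESHOLD" ]; then
    echo "$q has $depth pending items" | mail -s "FlowDrop queue backlog" ops@example.com
  fi
done
```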
## Trigger Load Considerations
Entity triggers fire on every matching entity save. For content-heavy sites:
- Avoid synchronous triggers on high-frequency entity saves (e.g., node view counts, statistics); use the asynchronous orchestrator instead.
- Use bundle filtering in the trigger configuration to limit scope: triggering on `node` (all bundles) is broader than `node:article`.
- Entity presave triggers run synchronously during the save operation. Keep presave workflows fast (< 100ms); any external API calls in a presave workflow will slow down content editing for editors.
## High-Availability Deployments
FlowDrop queue workers are stateless — you can run them on any node in a load-balanced cluster. Ensure:
- All nodes share the same database (standard Drupal requirement).
- All nodes share the same file system or object storage for any file-handling nodes.
- Queue workers on each node process the same queues — Drupal's queue system handles distributed locking.
## Next Steps
- Monitoring Workflows — Admin UI for pipeline and job status
- Troubleshooting — Common operational issues
- Configuration Management — Deploying workflow definitions