Production-grade workflow automation.
No drag-and-drop required.
Build, version, and deploy your workflows with YAML.
workflow:
  name: ai-assistant
  input:
    type: http_webhook
  steps:
    - type: openai_completion
      api_key: ${env:OPENAI_KEY}
      model: gpt-4
      prompt: "Analyse: ${input.text}"
      output_to: analysis
    - type: slack_webhook
      webhook_url: ${env:SLACK_URL}
      text_template: "${analysis.content}"
What you're missing with drag-and-drop tools
Zapier, Make.com, and n8n are great for prototyping, but they lack the features developers need. Here's what's missing when you choose a visual builder over code.
No Version Control
Visual builders make it impossible to set up proper environments or track changes. Your workflows live in their UI, not your repo.
Not Developer-First
Their drag-and-drop interface is great for demos, but limited customisation means you hit walls fast.
Complex Deployment
You're stuck clicking through their UIs to deploy changes. No CI/CD, no simple deployment workflow.
The solution? Workflows as Code
Write your workflow in YAML. Deploy with one command. No infrastructure to configure, no queues to manage, no clicking through UIs. From code to production instantly.
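That one-command deploy slots naturally into CI. A minimal sketch with GitHub Actions, assuming an etlr deploy subcommand, an ETLR_API_KEY secret, and a workflows/ directory (none of these are documented on this page; only etlr restore appears below):

name: deploy-workflows
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy workflow definition
        # Assumes the etlr CLI is already available on the runner.
        # `etlr deploy` and ETLR_API_KEY are illustrative assumptions,
        # not a documented ETLR interface.
        run: etlr deploy workflows/ai-assistant.yaml
        env:
          ETLR_API_KEY: ${{ secrets.ETLR_API_KEY }}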
From YAML to production in seconds
Zero setup, instant scale
Deploy however you prefer
Version Control & Git Workflow
Treat workflows like any other code. Track changes, review in PRs, and roll back with confidence. Full version history built in.
$ etlr restore --id abc123 --version 2
Git-Friendly YAML
Store workflows in your repo. Review changes in PRs like any other code.
Automatic Versioning
Every deployment creates a new version. Full history in the dashboard.
One-Click Rollback
Restore any previous version instantly via UI or CLI command.
Code Review Ready
Treat workflow changes like code. Approve, comment, and merge with confidence.
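For instance, a reviewer might see a workflow change as an ordinary diff against the YAML above (the file path and the new model value here are illustrative, not part of the product):

# workflows/ai-assistant.yaml
   steps:
     - type: openai_completion
       api_key: ${env:OPENAI_KEY}
-      model: gpt-4
+      model: gpt-4o
       prompt: "Analyse: ${input.text}"
       output_to: analysis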
Production-Ready Observability
Monitor, debug, and optimise with complete visibility into every workflow run. Built-in metrics, logs, and traces.
Real-Time Metrics
Track execution counts, success rates, and performance trends across all workflows.
Structured Logs
Detailed logs for every step. Search, filter, and debug with complete visibility.
Execution Traces
Step-by-step timeline showing exactly what happened and how long it took.
Error Tracking
Automatic error detection with stack traces and context for fast debugging.
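The exact log schema isn't shown on this page, but a per-step structured log entry might carry fields along these lines (all field names below are illustrative assumptions, not ETLR's documented format):

# Illustrative shape of a per-step log entry; field names are assumed
run_id: run_abc123
workflow: ai-assistant
step: openai_completion
status: success
attempt: 1
duration_ms: 842
timestamp: "2025-01-01T12:00:00Z"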
Simple, transparent pricing
Our pricing is straightforward: credit-based monthly billing.
Free
- 100 credits included
- All integrations
- Custom Python integrations
- Community support
Professional
- 10,000 credits included
- All integrations
- Custom Python integrations
- Priority support
Enterprise
- Unlimited credits
- All integrations
- Custom Python integrations
- Dedicated support
- SLA guarantees
- Bespoke integrations
How Credits Work
One credit equals one workflow execution, regardless of the number of steps.
1 Credit = 1 Execution
Every workflow run counts as one credit, no matter how many steps it contains.
Monthly Reset
Credits reset at the start of each billing cycle. Unused credits don't roll over.
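As a worked example: a webhook-triggered workflow that receives 300 events a day uses about 300 × 30 = 9,000 credits a month, one per execution no matter how many steps each run contains, which fits within the Professional tier's 10,000 included credits.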
A typical AI workflow in ETLR:
This example shows how ETLR handles a common scenario: receiving webhook data, enriching it, and forwarding it to external services.
Receive webhook events
Accept HTTP webhooks from any source - APIs, databases, or third-party services.
Add metadata automatically
Enrich your data with timestamps, unique IDs, and tracking information.
Transform with custom code
Run your Python functions to normalise, validate, or transform data exactly how you need it.
Send to external services
Forward processed data to databases, APIs, or notification services with built-in retry logic.
workflow:
  name: "add_timestamp_normalise_and_post_user"
  input:
    type: http_webhook
  steps:
    - type: add_timestamp
      format: ISO-8601
      field: timestamp
    - type: python_function
      code: |
        def process(event):
            event['full_name'] = f"{event['first_name']} {event['last_name']}"
            return event
      handler: process
    - type: http_request
      url: "https://example.org/users"
      headers:
        x-api-key: ${env:API_KEY}
      method: POST
Real-time health monitoring:
This example demonstrates a cron workflow that monitors etlr.io every minute and sends Discord alerts for any non-200 responses.
Schedule health checks
Run every minute with start_now enabled to begin monitoring immediately.
Ping the endpoint
Make an HTTP call with status tracking and 5-second timeout, injecting the response into state.
Filter non-200 responses
Only continue the workflow when the status code is not 200.
Alert via Discord
Send a webhook notification with status code and latency details.
workflow:
  name: "etlr_healthcheck"
  input:
    type: cron
    cron: '*/1 * * * *'
    start_now: true
  steps:
    - type: http_call
      url: https://etlr.io
      include_status: true
      inject: http
      timeout: 5
    - type: filter
      groups:
        - conditions:
            - field: http.status
              op: ne
              value: 200
    - type: discord_webhook
      webhook_url: ${env:WEBHOOK_URL}
      content_template: >-
        Warning: etlr.io ping status=${http.status}
        latency=${http.duration_ms}ms