Observability
The admin observability section surfaces execution metrics, AI usage data, and failed-execution management. Access it via Administration in the top navigation bar.
There are three pages under Observability:
| Page | Purpose |
|---|---|
| Process Analytics | Workflow execution metrics — counts, durations, error hotspots |
| AI Analytics | AI model usage, token consumption, cost breakdown |
| Dead Letter Queue | Failed node executions awaiting retry or discard |
Access requires the ORG_ADMIN or SUPER_ADMIN role.
Process Analytics
Path: /admin/process-analytics

Date Range
A date-range dropdown at the top of the page controls all data shown (default: last 30 days).
Summary Cards
Four KPI cards show the key metrics for the selected period:
| Card | What it shows |
|---|---|
| Total Executions | All workflow runs in the period |
| Avg Duration | Mean execution time of completed processes |
| Success Rate | Percentage of runs that completed successfully |
| Error Rate | Percentage of runs that failed (aborted) |
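The relationship between these four cards can be sketched as follows. The record shape and the choice of denominator (finished runs only) are illustrative assumptions, not the actual API:

```python
from statistics import mean

# Illustrative execution records; field names are assumptions, not the real API.
executions = [
    {"state": "COMPLETED", "duration_s": 12.0},
    {"state": "COMPLETED", "duration_s": 8.0},
    {"state": "ABORTED", "duration_s": 30.0},
    {"state": "ACTIVE", "duration_s": None},  # still running; excluded from the rates below
]

finished = [e for e in executions if e["state"] in ("COMPLETED", "ABORTED")]
completed = [e for e in finished if e["state"] == "COMPLETED"]

total_executions = len(executions)                       # Total Executions card
avg_duration = mean(e["duration_s"] for e in completed)  # Avg Duration card
success_rate = len(completed) / len(finished) * 100      # Success Rate card
error_rate = 100 - success_rate                          # Error Rate card
```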
Execution Timeline Chart
A line chart showing completed vs. errored execution counts per day. Use this to correlate deployment events with changes in error rates.
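The per-day series behind this chart amounts to bucketing runs by date. A minimal sketch, with made-up dates and states:

```python
from collections import defaultdict
from datetime import date

# Hypothetical (day, final state) pairs for finished runs.
runs = [
    (date(2024, 5, 1), "COMPLETED"),
    (date(2024, 5, 1), "ABORTED"),
    (date(2024, 5, 2), "COMPLETED"),
]

# One bucket per day, counting completed vs. errored runs.
timeline = defaultdict(lambda: {"completed": 0, "errored": 0})
for day, state in runs:
    key = "completed" if state == "COMPLETED" else "errored"
    timeline[day][key] += 1
```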
Node Duration Chart
A horizontal bar chart showing the top 10 slowest nodes by average execution time (in seconds).
Slowest Nodes Table
A paginated table (5 items per page) ranking nodes by average execution time:
| Column | Description |
|---|---|
| Node | Node name |
| Type | Node type badge |
| Avg Duration | Mean execution time |
| Executions | Total run count in the period |
Click any row to expand it and see a nested table of individual executions for that node — showing process name, status, duration, and when it ran.
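The ranking in both the chart and the table is a group-by-node average. A sketch with invented node names and durations:

```python
from statistics import mean

# Hypothetical (node name, duration in seconds) samples.
node_runs = [
    ("Send Email", 2.0), ("Send Email", 4.0),
    ("AI Agent", 9.0), ("AI Agent", 11.0),
    ("HTTP Call", 1.0),
]

# Group durations by node.
by_node = {}
for name, dur in node_runs:
    by_node.setdefault(name, []).append(dur)

# Rank by mean duration, keeping (name, avg duration, execution count).
slowest = sorted(
    ((name, mean(durs), len(durs)) for name, durs in by_node.items()),
    key=lambda row: row[1],
    reverse=True,
)[:10]  # top 10, as in the Node Duration chart
```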
Error Hotspots Table
A paginated table (10 items per page) ranking nodes by error count:
| Column | Description |
|---|---|
| Node Type | Type of the failing node |
| Node Name | Display name |
| Error Count | Number of errors in the period |
Click any row to expand it and see the individual error occurrences — showing process name, error message, duration, and time.
Recent Executions Table
A paginated table (10 items per page) of recent process instances:
| Column | Description |
|---|---|
| Workflow | Process name |
| State | Active / Waiting / Completed / Aborted (color-coded) |
| Started | Start timestamp |
| Duration | Execution time |
Click any row to expand it and see the node-level breakdown plus process variables for that execution.
AI Analytics
Path: /admin/ai-analytics

Date Range
Same date range selector as Process Analytics (default: 30 days).
Summary Cards
| Card | What it shows |
|---|---|
| AI Calls | Total AI Agent node executions |
| Tokens Used | Combined prompt + completion tokens |
| Total Cost | Estimated cost (in dollars) |
| Success Rate | Percentage of successful AI calls |
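Cost figures like these are typically derived from token counts and per-model prices. The rates below are placeholders for illustration, not the product's actual pricing:

```python
# Placeholder per-1M-token prices in USD; real rates vary by model and provider.
PRICES = {
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the dollar cost of one AI call from its token usage."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"] + completion_tokens * p["completion"]) / 1_000_000

cost = estimate_cost("gpt-4o", prompt_tokens=1000, completion_tokens=500)
```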
AI Cost & Calls Trend Chart
A combined line chart showing daily AI cost (in cents) and call count over the selected period. Useful for spotting days with unusually high AI spend.
Tokens by Model Chart
A horizontal bar chart showing total token consumption per AI model. Use this to understand which model drives the most usage.
AI Cost by Organization Chart
(Super Admins only) A horizontal bar chart showing AI cost ranked by organization. Useful for billing attribution across tenants.
Model Comparison Table
A paginated table (10 items per page) breaking down performance per model:
| Column | Description |
|---|---|
| Model | Model identifier (e.g., gpt-4o, claude-3-5-sonnet) |
| Calls | Number of executions using this model |
| Tokens | Total tokens consumed |
| Cost | Estimated cost |
| Avg Time | Mean response latency |
Click any model row to expand it and see a nested table of individual AI task executions — showing process name, task description (truncated), tokens, cost, latency, tools used, and time.
Dead Letter Queue
Path: /admin/dead-letter-queue (listed under Observability in the admin sidebar)
The Dead Letter Queue (DLQ) holds node executions that failed and could not be automatically recovered. Each item represents a node that the workflow engine was unable to complete.
Stats Cards
Four summary counts at the top of the page:
| Card | Meaning |
|---|---|
| Total Items | All items in the DLQ |
| Failed | Items awaiting retry or discard |
| Retrying | Items currently being retried |
| Discarded | Items permanently discarded |
Filtering
- Search — filter by node name, error message, or process ID (debounced)
- Status dropdown — filter by: All / Failed / Retrying / Discarded
DLQ Table
| Column | Description |
|---|---|
| Node | Node type and ID |
| Error | Truncated error message (hover for full text) |
| Retries | Number of retry attempts made |
| Status | Failed (red) / Retrying (yellow) / Discarded (gray) |
| Created | When the item entered the DLQ |
| Actions | Retry and Discard buttons |
Click any row to open the detail modal, which shows:
- Full node information (type, ID, retry count, status)
- Process information (instance ID, node instance ID, process meta ID — each copyable)
- Full error message and stack trace (expandable)
- Timestamps (created, updated)
Retry
Click Retry Now to re-trigger the failed node execution. A confirmation dialog appears before the retry is sent. After a successful retry, the item is removed from the Failed list.
Discard
Click Discard to permanently mark the item as discarded. A confirmation dialog warns that the associated process instance remains in its current state. Use discard when a failure is unrecoverable or no longer relevant.
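The retry and discard actions above amount to a small state machine over the statuses shown in the DLQ table. This sketch is a model of that lifecycle, not the engine's actual implementation (the terminal "RESOLVED" state is an assumed name for an item removed from the Failed list):

```python
class DLQItem:
    """Minimal model of a DLQ item's lifecycle (illustrative, not the real engine)."""

    def __init__(self):
        self.status = "FAILED"
        self.retries = 0

    def retry(self):
        # Retry Now: re-trigger the failed node execution.
        if self.status == "DISCARDED":
            raise ValueError("discarded items cannot be retried")
        self.status = "RETRYING"
        self.retries += 1

    def on_retry_result(self, succeeded: bool):
        # A successful retry removes the item from the Failed list;
        # a failed retry returns it to FAILED for another attempt.
        self.status = "RESOLVED" if succeeded else "FAILED"

    def discard(self):
        # Permanent; the associated process instance keeps its current state.
        self.status = "DISCARDED"

item = DLQItem()
item.retry()  # operator clicks Retry Now
```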
Refresh
Click the Refresh button in the page header to reload the DLQ list from the server.