AI Task 🤖
Calls a Large Language Model to process text, classify data, generate responses, or orchestrate tool use.
Node type: aiTask
Category: AI
Actor: aiTask (1 thread)
Description
The AI Task is the core node for LLM-powered processing. It sends a prompt to a configured AI provider (OpenAI, Anthropic, Google Gemini, Azure OpenAI, or Ollama) and captures the response as workflow variables.
The AI Task supports:
- Simple generation — single prompt/response cycle
- Multi-turn conversation — maintains chat history across executions (via Memory)
- Tool use (function calling) — connect Tool nodes to extend the AI's capabilities
- A2UI generation — AI dynamically generates interactive UI for chat-based workflows
- Streaming — stream tokens to the browser in real time over a WebSocket connection
Properties
Screenshot coming soon — properties-panel-ai-task.png
| Property | Type | Required | Description |
|---|---|---|---|
| aiProviderConnectionId | string | Yes | ID of the AI Provider integration (OpenAI, Anthropic, etc.) |
| systemPrompt | textarea | No | System message sent to the LLM. Supports {varName} references |
| prompt | textarea | Yes | User message/prompt. Supports {varName} references |
| maxMessages | number | No | Conversation-history limit for multi-turn sessions. Default: no limit |
| streamToChat | checkbox | No | Stream AI tokens to the chat UI in real time (requires A2UI/chat workflow) |
| enableA2UI | checkbox | No | Allow the AI to generate dynamic UI components (A2UI) |
| forceAwaitInput | checkbox | No | Pause execution after the AI response and wait for user input |
Inputs
All workflow variables are available in systemPrompt and prompt using {varName} syntax:
systemPrompt: "You are a support agent for {companyName}.
The customer's name is {customerName} and their tier is {accountTier}."
prompt: "Customer message: {customerMessage}
Please classify this as: billing, technical, account, or general."
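The substitution behavior can be sketched roughly as follows (the `interpolate` helper is an illustration, not the engine's actual implementation; leaving unknown names untouched is an assumption):

```python
import re

def interpolate(template: str, variables: dict) -> str:
    """Replace each {varName} with its value; unknown names are left intact (assumed)."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        return str(variables[name]) if name in variables else match.group(0)
    return re.sub(r"\{(\w+)\}", repl, template)

prompt = interpolate(
    "Customer message: {customerMessage}\n"
    "Please classify this as: billing, technical, account, or general.",
    {"customerMessage": "I was charged twice this month"},
)
```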
Outputs
The AI Task stores the LLM response in the following variables:
| Variable | Type | Description |
|---|---|---|
| {aiResponse} | string | The raw text response from the LLM |
| {aiTaskOutput} | object | Structured output if the AI returned JSON |
If the prompt instructs the AI to return structured JSON (see example below), the response is automatically parsed and individual fields become workflow variables.
Structured Output Example
Prompt:
Classify this support ticket and return JSON:
{
"classification": "<billing|technical|account|general>",
"priority": "<low|medium|high|critical>",
"summary": "<one sentence summary>"
}
Ticket: {customerMessage}
The AI response is automatically parsed. Downstream nodes can access {classification}, {priority}, and {summary} as individual variables.
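The documented parse-or-fall-back behavior can be modeled like this (a minimal sketch; the function name and the exact merge semantics are assumptions, not the engine's code):

```python
import json

def parse_ai_response(raw: str) -> dict:
    """Turn an AI response into workflow variables.

    Valid JSON objects populate {aiTaskOutput} and promote each field to a
    variable (e.g. classification, priority, summary); invalid JSON leaves
    only {aiResponse} set, with the parse error logged by the engine.
    """
    variables = {"aiResponse": raw}
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return variables  # parse error would be logged, raw text still available
    if isinstance(parsed, dict):
        variables["aiTaskOutput"] = parsed
        variables.update(parsed)  # individual fields become variables
    return variables

result = parse_ai_response(
    '{"classification": "billing", "priority": "high", "summary": "Double charge"}'
)
```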
Tool Use (Function Calling)
Connect Tool nodes to the AI Task using toolConnection links. The AI Task orchestrates tool use automatically:
1. The AI Task receives the prompt
2. If the LLM decides to use a tool, the engine executes the connected Tool node
3. The tool result is fed back to the LLM
4. The LLM generates a final response after tool use
[AI Task] ──toolConnection──► [Tool: Knowledge Base]
          ──toolConnection──► [Tool: REST API]
          ──toolConnection──► [Tool: SQL]
You only connect the tools; the LLM decides which tools to call, in what order, and with what parameters.
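The steps above can be sketched as a loop (the `llm` client, its reply shape, and the tool registry are hypothetical stand-ins for the engine's internals):

```python
def run_ai_task(llm, tools: dict, messages: list, max_rounds: int = 5) -> str:
    """Drive the tool-use loop: ask the LLM, execute any requested tool,
    feed the result back, and stop once the LLM answers in plain text."""
    for _ in range(max_rounds):
        reply = llm(messages)                # hypothetical provider client
        if reply.get("tool_call") is None:
            return reply["text"]             # final response after tool use
        call = reply["tool_call"]
        result = tools[call["name"]](**call["args"])  # execute the Tool node
        # feed the tool result back into the conversation
        messages.append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("tool-use loop did not settle on a final answer")

# Toy demo: a scripted "LLM" that first requests a tool, then answers.
replies = iter([
    {"tool_call": {"name": "kb_search", "args": {"q": "refund policy"}}},
    {"tool_call": None, "text": "Refunds are processed within 5 days."},
])
answer = run_ai_task(
    llm=lambda msgs: next(replies),
    tools={"kb_search": lambda q: f"3 documents matched '{q}'"},
    messages=[{"role": "user", "content": "What is the refund policy?"}],
)
```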
A2UI (Dynamic UI Generation)
When enableA2UI is checked, the AI Task can generate dynamic interactive UI components that are rendered in the chat interface. The LLM generates A2UI component descriptors as part of its response.
Example AI response generating a form:
{
"type": "A2UICard",
"title": "Confirm Your Details",
"children": [
{ "type": "A2UIText", "text": "Please review your order" },
{ "type": "A2UIButton", "label": "Confirm", "action": { "name": "confirm_order" } },
{ "type": "A2UIButton", "label": "Cancel", "action": { "name": "cancel_order" } }
]
}
See the A2UI Components guide for the full component catalog.
Streaming
When streamToChat is enabled, the AI Task streams tokens to the browser via WebSocket (/ws/chat/{processInstanceId}) as the LLM generates the response. This provides a real-time typing effect in chat-based workflows.
Streaming requires the workflow to be running in a chat context (e.g., the Public Workflow or Form Builder interface).
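Conceptually, the client accumulates partial text as tokens arrive. A minimal consumer sketch (the socket transport is omitted; the `render` callback is a hypothetical UI hook):

```python
def consume_stream(tokens, render):
    """Append each streamed token to the running text and re-render,
    producing the real-time typing effect described above."""
    text = ""
    for tok in tokens:
        text += tok
        render(text)  # hypothetical UI hook: update the chat bubble
    return text

frames = []
final = consume_stream(["Hel", "lo", ", wor", "ld!"], frames.append)
```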
Connections
| Connection | Direction | Description |
|---|---|---|
| sequenceFlow | incoming | Normal execution path |
| toolConnection | outgoing | Connects to Tool nodes (AI decides usage) |
| successFlow | outgoing | Taken after a successful AI response |
| errorFlow | outgoing | Taken if the AI provider returns an error |
| timeoutFlow | outgoing | Taken if the LLM call exceeds the timeout |
| sequenceFlow | outgoing | Default outgoing path |
Example: Support Ticket Classification
{
"nodeId": "classify-1",
"name": "Classify Support Ticket",
"nodeType": "aiTask",
"properties": {
"aiProviderConnectionId": "int_openai_prod",
"systemPrompt": "You are a support ticket classifier for {companyName}. Always respond with valid JSON only.",
"prompt": "Classify this support ticket:\n\n{customerMessage}\n\nRespond with:\n{\"classification\": \"billing|technical|account|general\", \"priority\": \"low|medium|high\", \"needsHuman\": true|false}"
},
"timeout": {
"duration": 30,
"durationUom": "SECONDS",
"action": "FAIL"
}
}
Supported AI Providers
Configure the AI provider via an AI Provider Integration:
| Provider | Models |
|---|---|
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo |
| Anthropic | claude-opus-4, claude-sonnet-4, claude-haiku-4 |
| Google Gemini | gemini-2.0-flash, gemini-1.5-pro |
| Azure OpenAI | All Azure-deployed models |
| Ollama | Any locally deployed model |
The model is configured in the integration — the AI Task node just references the integration ID.
Error Handling
| Scenario | Behavior |
|---|---|
| LLM API error (5xx) | Takes errorFlow if connected; otherwise marks node as Failed |
| Rate limit from LLM provider | Retried per node retry configuration |
| Timeout exceeded | Takes timeoutFlow if connected; otherwise marks node as Timed Out |
| Invalid JSON in structured response | Raw text returned in {aiResponse}, parse error logged |
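Rate-limit handling might look like the following sketch (the exception type, attempt count, and exponential backoff are illustrative; the real policy comes from the node's retry configuration):

```python
import time

class TransientError(Exception):
    """Stand-in for a rate-limit (429) or 5xx error from the provider."""

def call_with_retry(call, attempts: int = 3, backoff_s: float = 1.0):
    """Retry transient failures with exponential backoff; re-raise after
    the configured number of attempts so errorFlow can take over."""
    for attempt in range(attempts):
        try:
            return call()
        except TransientError:
            if attempt == attempts - 1:
                raise  # exhausted: node is marked Failed / takes errorFlow
            time.sleep(backoff_s * (2 ** attempt))
```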
Best Practices
- Always set a timeout (30–60 seconds) — LLM calls can be slow under load
- Use structured JSON prompts for data extraction — parse individual fields as workflow variables
- Keep systemPrompt static and put dynamic data in prompt
- Use the {env.OPENAI_API_KEY} pattern via secrets — never hardcode API keys in prompts
- Enable retry (2–3 attempts) for production workflows to handle transient LLM errors
- Use streamToChat only in interactive chat workflows — it has no effect in background executions