AI Task 🤖

Calls a Large Language Model to process text, classify data, generate responses, or orchestrate tool use.

Node type: aiTask · Category: AI · Actor: aiTask (1 thread)


Description

The AI Task is the core node for LLM-powered processing. It sends a prompt to a configured AI provider (OpenAI, Anthropic, Google Gemini, Azure OpenAI, or Ollama) and captures the response as workflow variables.

The AI Task supports:

  • Simple generation — single prompt/response cycle
  • Multi-turn conversation — maintains chat history across executions (via Memory)
  • Tool use (function calling) — connect Tool nodes to extend the AI's capabilities
  • A2UI generation — AI dynamically generates interactive UI for chat-based workflows
  • Streaming — stream tokens to the browser in real time via WebSocket

Properties

Screenshot coming soon — properties-panel-ai-task.png

| Property | Type | Required | Description |
|---|---|---|---|
| aiProviderConnectionId | string | Yes | ID of the AI Provider integration (OpenAI, Anthropic, etc.) |
| systemPrompt | textarea | No | System message sent to the LLM. Supports {varName} references |
| prompt | textarea | Yes | User message/prompt. Supports {varName} references |
| maxMessages | number | No | Conversation history limit for multi-turn sessions. Default: no limit |
| streamToChat | checkbox | No | Stream AI tokens to the chat UI in real time (requires A2UI/chat workflow) |
| enableA2UI | checkbox | No | Allow the AI to generate dynamic UI components (A2UI) |
| forceAwaitInput | checkbox | No | Pause execution after the AI response and wait for user input |

Inputs

All workflow variables are available in systemPrompt and prompt using {varName} syntax:

systemPrompt: "You are a support agent for {companyName}.
The customer's name is {customerName} and their tier is {accountTier}."

prompt: "Customer message: {customerMessage}
Please classify this as: billing, technical, account, or general."
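To make the `{varName}` substitution concrete, here is a minimal sketch of how such a template might be resolved against workflow variables. This is illustrative only — the engine's actual resolution rules (nesting, escaping, missing-variable behavior) are not specified here, and the `interpolate` helper is hypothetical.

```python
import re

def interpolate(template: str, variables: dict) -> str:
    """Replace {varName} placeholders with workflow variable values.

    Hypothetical sketch: unknown references are left untouched, which
    may or may not match the engine's real behavior.
    """
    def resolve(match: re.Match) -> str:
        name = match.group(1)
        return str(variables.get(name, match.group(0)))

    return re.sub(r"\{(\w+)\}", resolve, template)

# Resolving the prompt from the example above:
prompt = interpolate(
    "Customer message: {customerMessage}\n"
    "Please classify this as: billing, technical, account, or general.",
    {"customerMessage": "I was charged twice this month."},
)
```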

Outputs

The AI Task stores the LLM response in the following variables:

| Variable | Type | Description |
|---|---|---|
| {aiResponse} | string | The raw text response from the LLM |
| {aiTaskOutput} | object | Structured output if the AI returned JSON |

If the prompt instructs the AI to return structured JSON (see example below), the response is automatically parsed and individual fields become workflow variables.

Structured Output Example

Prompt:

Classify this support ticket and return JSON:

{
  "classification": "<billing|technical|account|general>",
  "priority": "<low|medium|high|critical>",
  "summary": "<one sentence summary>"
}

Ticket: {customerMessage}

The AI response is automatically parsed. Downstream nodes can access {classification}, {priority}, and {summary} as individual variables.
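The parse-and-flatten behavior described above can be sketched as follows. This mirrors the documented contract (raw text in `{aiResponse}`, parsed object in `{aiTaskOutput}`, top-level fields promoted to variables); the engine's real parser may be more lenient, e.g. stripping markdown fences, and `parse_structured_output` is a hypothetical name.

```python
import json

def parse_structured_output(ai_response: str) -> dict:
    """Flatten a structured LLM reply into workflow variables.

    Sketch only: on invalid JSON the documented behavior is to keep
    the raw text in {aiResponse} and log a parse error.
    """
    try:
        data = json.loads(ai_response)
    except json.JSONDecodeError:
        return {"aiResponse": ai_response}  # invalid JSON: raw text only

    variables = {"aiResponse": ai_response, "aiTaskOutput": data}
    if isinstance(data, dict):
        # Top-level fields become {classification}, {priority}, {summary}, ...
        variables.update(data)
    return variables
```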


Tool Use (Function Calling)

Connect Tool nodes to the AI Task using toolConnection links. The AI Task orchestrates tool use automatically:

  1. AI Task receives the prompt
  2. If the LLM decides to use a tool, the engine executes the connected Tool node
  3. Tool result is fed back to the LLM
  4. LLM generates a final response after tool use
[AI Task] ──toolConnection──► [Tool: Knowledge Base]
          ──toolConnection──► [Tool: REST API]
          ──toolConnection──► [Tool: SQL]

You only connect the tools; the LLM decides which to call, in what order, and with what parameters.
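The four-step loop above can be sketched as a simple orchestration function. Everything here is an assumption for illustration: the `{"tool": ..., "args": ...}` / `{"final": ...}` reply shape and the `run_with_tools` name are hypothetical, not the engine's actual internal protocol.

```python
def run_with_tools(llm_call, tools: dict, prompt: str, max_rounds: int = 5):
    """Hypothetical tool-use loop: call the LLM, execute any requested
    tool, feed the result back, and repeat until a final answer.

    llm_call(messages) is assumed to return either
    {"tool": name, "args": {...}} or {"final": text}.
    """
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_rounds):
        reply = llm_call(messages)
        if "final" in reply:
            return reply["final"]  # step 4: final response after tool use
        # Steps 2-3: execute the connected Tool node, feed the result back.
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "name": reply["tool"], "content": result})
    raise RuntimeError("tool loop did not produce a final answer")
```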


A2UI (Dynamic UI Generation)

When enableA2UI is checked, the AI Task can generate dynamic interactive UI components that are rendered in the chat interface. The LLM generates A2UI component descriptors as part of its response.

Example AI response generating a form:

{
  "type": "A2UICard",
  "title": "Confirm Your Details",
  "children": [
    { "type": "A2UIText", "text": "Please review your order" },
    { "type": "A2UIButton", "label": "Confirm", "action": { "name": "confirm_order" } },
    { "type": "A2UIButton", "label": "Cancel", "action": { "name": "cancel_order" } }
  ]
}
```

See the A2UI Components guide for the full component catalog.
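Because the LLM generates these descriptors, a renderer typically walks the component tree and rejects types it does not know. A minimal sketch, assuming a tree shape like the card example (the three component types listed are only the subset shown above, not the full catalog, and `validate_a2ui` is a hypothetical helper):

```python
# Illustrative subset only; the full catalog is in the A2UI Components guide.
KNOWN_A2UI_TYPES = {"A2UICard", "A2UIText", "A2UIButton"}

def validate_a2ui(descriptor: dict) -> list:
    """Walk an A2UI component tree and collect unknown component types."""
    errors = []
    if descriptor.get("type") not in KNOWN_A2UI_TYPES:
        errors.append(f"unknown component: {descriptor.get('type')}")
    for child in descriptor.get("children", []):
        errors.extend(validate_a2ui(child))
    return errors
```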


Streaming

When streamToChat is enabled, the AI Task streams tokens to the browser via WebSocket (/ws/chat/{processInstanceId}) as the LLM generates the response. This provides a real-time typing effect in chat-based workflows.

Streaming requires the workflow to be running in a chat context (e.g., the Public Workflow or Form Builder interface).


Connections

| Connection | Direction | Description |
|---|---|---|
| sequenceFlow | incoming | Normal execution path |
| toolConnection | outgoing | Connects to Tool nodes (AI decides usage) |
| successFlow | outgoing | Taken after a successful AI response |
| errorFlow | outgoing | Taken if the AI provider returns an error |
| timeoutFlow | outgoing | Taken if the LLM call exceeds the timeout |
| sequenceFlow | outgoing | Default outgoing path |
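The table implies a routing decision after each execution: dedicated flows take precedence when connected, and `sequenceFlow` is the fallback. A sketch of that precedence, assuming the engine routes this way (the `pick_outgoing_flow` helper and the exact precedence rules are assumptions, not documented engine internals):

```python
def pick_outgoing_flow(connections: dict, outcome: str) -> str:
    """Pick the target node for an AI Task outcome.

    Hypothetical precedence: a dedicated flow (success/error/timeout)
    wins if connected; otherwise fall back to the default sequenceFlow.
    """
    preferred = {
        "success": "successFlow",
        "error": "errorFlow",
        "timeout": "timeoutFlow",
    }
    flow = preferred.get(outcome)
    if flow and flow in connections:
        return connections[flow]
    return connections["sequenceFlow"]  # default outgoing path
```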

Example: Support Ticket Classification

{
  "nodeId": "classify-1",
  "name": "Classify Support Ticket",
  "nodeType": "aiTask",
  "properties": {
    "aiProviderConnectionId": "int_openai_prod",
    "systemPrompt": "You are a support ticket classifier for {companyName}. Always respond with valid JSON only.",
    "prompt": "Classify this support ticket:\n\n{customerMessage}\n\nRespond with:\n{\"classification\": \"billing|technical|account|general\", \"priority\": \"low|medium|high\", \"needsHuman\": true|false}"
  },
  "timeout": {
    "duration": 30,
    "durationUom": "SECONDS",
    "action": "FAIL"
  }
}

Supported AI Providers

Configure the AI provider via an AI Provider Integration:

| Provider | Models |
|---|---|
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo |
| Anthropic | claude-opus-4, claude-sonnet-4, claude-haiku-4 |
| Google Gemini | gemini-2.0-flash, gemini-1.5-pro |
| Azure OpenAI | All Azure-deployed models |
| Ollama | Any locally deployed model |

The model is configured in the integration — the AI Task node just references the integration ID.


Error Handling

| Scenario | Behavior |
|---|---|
| LLM API error (5xx) | Takes errorFlow if connected; otherwise marks the node as Failed |
| Rate limit from LLM provider | Retried per the node's retry configuration |
| Timeout exceeded | Takes timeoutFlow if connected; otherwise marks the node as Timed Out |
| Invalid JSON in structured response | Raw text returned in {aiResponse}; parse error logged |
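The rate-limit row describes retry behavior that is commonly implemented as exponential backoff. A minimal sketch of that pattern, assuming a generic retry wrapper — `RateLimitError` is a stand-in here; real providers raise their own exception types, and the engine's actual retry policy is configured on the node:

```python
import time

class RateLimitError(Exception):
    """Placeholder for a provider rate-limit error (assumption)."""

def call_with_retry(call, attempts: int = 3, base_delay: float = 1.0):
    """Retry transient provider errors with exponential backoff.

    Sketch of the retry behavior described in the table above; the
    engine's real policy comes from the node retry configuration.
    """
    for attempt in range(attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```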

Best Practices

  • Always set a timeout (30–60 seconds) — LLM calls can be slow under load
  • Use structured JSON prompts for data extraction — parse individual fields as workflow variables
  • Keep systemPrompt static and put dynamic data in prompt
  • Use the {env.OPENAI_API_KEY} pattern via secrets — never hardcode API keys in prompts
  • Enable retry (2–3 attempts) for production workflows to handle transient LLM errors
  • Use streamToChat only in interactive chat workflows — it has no effect in background executions