- No-code using n8n’s built‑in AI integrations (e.g., OpenAI Chat).
- Low-code using the generic HTTP Request node to call any AI API (OpenAI, Azure OpenAI, Anthropic, etc.).
- (Optional) Pro-code: build a custom Community Node in TypeScript that encapsulates your AI logic.
I’ll also include a ready-to-import workflow JSON and a TypeScript skeleton for a custom node, so you can go from idea to production quickly.
Why an “AI node” in n8n?
n8n is a workflow automation tool that lets you chain nodes (steps) together—triggers, transformations, API calls, and actions. An “AI node” typically:
- Accepts input (prompt + context).
- Calls an LLM API (e.g., OpenAI).
- Returns structured output (text, JSON) for downstream nodes (routing, storage, notifications).
This becomes powerful when you automate: daily report summaries, ticket classification, code reviews, knowledge base Q&A, or data extraction from logs—workflows your team can reuse.
Prerequisites
- An n8n instance (self-hosted Docker or n8n Cloud).
- An AI provider API key (e.g., OpenAI).
- Basic familiarity with n8n nodes and credentials.
Security tip: Store your API keys in n8n Credentials (never hard-code keys in nodes or workflows). Use environment variables for self-hosted: N8N_ENCRYPTION_KEY, OPENAI_API_KEY, etc.
Approach A: Use the built-in OpenAI Chat node (No-code)
n8n ships with OpenAI nodes that are straightforward to configure.
Steps:
- Add Trigger: For demo, use Manual Trigger.
- Add Node: Search for OpenAI → Chat.
- Credentials: Create OpenAI credentials with your API key.
- Model: Select gpt-4o or gpt-4.1 (or any supported model in your setup).
- Messages:
  - System: “You are a professional assistant that responds concisely.”
  - User: Use an expression to pass dynamic input, e.g. {{$json.prompt}}.
- Input variable: Add a Set node before OpenAI to define prompt, or map from a webhook/email, etc.
- Output: The node returns data.choices[0].message.content (or response, depending on node version). Pass it to subsequent nodes (e.g., email, Slack, DB insert).
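Because the output field name varies by node version, a defensive mapping helps downstream nodes. A minimal sketch for a Code node (the field names here are assumptions; check your node version's output pane for the actual structure):

```javascript
// Extract the assistant's text from either of two common output shapes:
// the raw Chat Completions shape (`choices`) or a flattened `response` field.
function extractContent(json) {
  // Prefer the raw Chat Completions shape if present
  const fromChoices = json?.choices?.[0]?.message?.content;
  if (typeof fromChoices === 'string') return fromChoices;
  // Fall back to a flattened field some node versions use
  if (typeof json?.response === 'string') return json.response;
  return '';
}
```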
Pros: Fast, simple, minimal configuration.
Cons: Tied to the OpenAI node’s capabilities; less control than the raw API.
Approach B: Use HTTP Request node for any AI provider (Low-code)
This gives you fine-grained control and works with providers that may not have a dedicated n8n node.
Example with OpenAI Chat Completions API:
Manual Trigger (start node)
Set node → create fields:
- prompt: “Summarize these NOC logs and flag potential network issues…”
- context: Logs or ticket payload (string or JSON).
HTTP Request node: POST to the provider’s chat endpoint, with an Authorization header and a JSON body containing model, temperature, and messages (full configuration in the importable workflow below).
(Optional) IF node: Check for errors or empty responses.
(Optional) Code / Function node: Parse structured sections, extract JSON with regex, convert to n8n fields.
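The two optional steps can be combined into a single Code node. A minimal sketch, assuming the HTTP Request node’s parsed JSON arrives as the item’s `json`:

```javascript
// Normalize an OpenAI-style response: surface API errors and empty
// completions as an explicit `error` field, so an IF node can route on `ok`.
function normalizeResponse(res) {
  if (res?.error) {
    // OpenAI returns { error: { message, type, ... } } on failure
    return { ok: false, error: res.error.message ?? 'Unknown API error' };
  }
  const content = res?.choices?.[0]?.message?.content ?? '';
  if (!content.trim()) {
    return { ok: false, error: 'Empty completion' };
  }
  return { ok: true, ai_output: content };
}
```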
Pros: Works with any AI API (OpenAI, Azure OpenAI, Anthropic, local models behind an HTTP endpoint).
Cons: You must handle headers, body schema, rate limits, and errors yourself.
Example: Ready-to-import workflow (JSON)
What this does:
- Takes a prompt and context (could be logs or ticket text).
- Calls the OpenAI Chat API via HTTP Request.
- Extracts the assistant’s message and returns it.
Copy and import into n8n (top-right menu → Import from file). Replace the credential placeholder with your credential name, or set the header inline with your key for testing.
{
"name": "AI Chat via HTTP (OpenAI) - Example",
"nodes": [
{
"parameters": {},
"id": "ManualTrigger1",
"name": "Manual Trigger",
"type": "n8n-nodes-base.manualTrigger",
"typeVersion": 1,
"position": [260, 300]
},
{
"parameters": {
"keepOnlySet": true,
"values": {
"string": [
{
"name": "prompt",
"value": "Summarize these NOC logs and list top 3 likely root causes with confidence scores."
},
{
"name": "context",
"value": "2025-12-31 06:18: Degraded throughput on WAN link BLR-Edge-03; BGP flap observed…"
}
]
}
},
"id": "SetPrompt2",
"name": "Set Prompt & Context",
"type": "n8n-nodes-base.set",
"typeVersion": 2,
"position": [460, 300]
},
{
"parameters": {
"authentication": "none",
"url": "https://api.openai.com/v1/chat/completions",
"options": {
"responseFormat": "json"
},
"headerParametersUi": {
"parameter": [
{
"name": "Authorization",
"value": "Bearer {{ $env.OPENAI_API_KEY }}"
},
{
"name": "Content-Type",
"value": "application/json"
}
]
},
"jsonParameters": true,
"optionsUi": {},
"queryParametersUi": {
"parameter": []
},
"bodyParametersJson": "={\n \"model\": \"gpt-4o\",\n \"temperature\": 0.2,\n \"messages\": [\n { \"role\": \"system\", \"content\": \"You are a network ops assistant. Be precise and concise.\" },\n { \"role\": \"user\", \"content\": \"Prompt: {{$json.prompt}}\\nContext: {{$json.context}}\" }\n ]\n}"
},
"id": "HTTPRequest3",
"name": "OpenAI Chat API",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4,
"position": [700, 300]
},
{
"parameters": {
"functionCode": "const res = items[0].json;\ntry {\n const content = res.choices?.[0]?.message?.content ?? '';\n return [{ json: { ai_output: content } }];\n} catch (e) {\n return [{ json: { error: e.message || String(e), raw: res } }];\n}"
},
"id": "Function4",
"name": "Extract AI Output",
"type": "n8n-nodes-base.function",
"typeVersion": 1,
"position": [940, 300]
}
],
"connections": {
"Manual Trigger": { "main": [ [ { "node": "Set Prompt & Context", "type": "main", "index": 0 } ] ] },
"Set Prompt & Context": { "main": [ [ { "node": "OpenAI Chat API", "type": "main", "index": 0 } ] ] },
"OpenAI Chat API": { "main": [ [ { "node": "Extract AI Output", "type": "main", "index": 0 } ] ] }
}
}
Notes:
- For production, swap {{ $env.OPENAI_API_KEY }} for a proper n8n Credential reference (e.g., an OpenAI credential).
- If you’re using Azure OpenAI, the URL and headers differ (endpoint per resource, api-key header, api-version query parameter).
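To make the Azure difference concrete, here is a sketch of how the request target changes (the resource name, deployment name, and API version below are placeholders you must replace with your own):

```javascript
// Build an Azure OpenAI Chat Completions request target.
// Unlike api.openai.com, Azure uses a per-resource endpoint, a deployment
// name in the path, an `api-version` query param, and an `api-key` header.
function azureChatRequest(resource, deployment, apiVersion, apiKey) {
  return {
    url:
      `https://${resource}.openai.azure.com/openai/deployments/${deployment}` +
      `/chat/completions?api-version=${apiVersion}`,
    headers: {
      'api-key': apiKey, // not `Authorization: Bearer ...`
      'Content-Type': 'application/json',
    },
  };
}
```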
Approach C: Build a custom Community Node in TypeScript (Pro-code)
If you want a reusable, shareable “AI node” with curated parameters (model, temperature, prompt templates), create a Community Node in TypeScript.
High-level steps:
- Scaffold a community node repo (n8n expects a specific structure).
- Define credentials (API key).
- Implement a node description (parameters, UI).
- Implement the execute method to call the provider API and return normalized output.
- Build & publish; install in n8n via Community Nodes.
Minimal TypeScript skeleton (AIChat.node.ts):
import { IExecuteFunctions } from 'n8n-workflow';
import { INodeExecutionData, INodeType, INodeTypeDescription } from 'n8n-workflow';
import axios from 'axios';
export class AIChat implements INodeType {
description: INodeTypeDescription = {
displayName: 'AI Chat',
name: 'aiChat',
group: ['transform'],
version: 1,
description: 'Call an AI chat completion endpoint',
defaults: { name: 'AI Chat' },
inputs: ['main'],
outputs: ['main'],
credentials: [
{
name: 'openAiApi',
required: true,
testedBy: 'openAiApiTest',
},
],
properties: [
{
displayName: 'Model',
name: 'model',
type: 'string',
default: 'gpt-4o',
},
{
displayName: 'Temperature',
name: 'temperature',
type: 'number',
typeOptions: { minValue: 0, maxValue: 1, step: 0.1 },
default: 0.2,
},
{
displayName: 'System Prompt',
name: 'systemPrompt',
type: 'string',
default: 'You are a helpful assistant.',
},
{
displayName: 'User Prompt',
name: 'userPrompt',
type: 'string',
default: '={{$json.prompt}}',
},
],
};
async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
const items = this.getInputData();
const model = this.getNodeParameter('model', 0) as string;
const temperature = this.getNodeParameter('temperature', 0) as number;
const systemPrompt = this.getNodeParameter('systemPrompt', 0) as string;
const userPrompt = this.getNodeParameter('userPrompt', 0) as string;
// Get credentials
const credentials = await this.getCredentials('openAiApi');
const apiKey = credentials?.apiKey as string;
if (!apiKey) throw new Error('OpenAI API key missing');
const results: INodeExecutionData[] = [];
for (let i = 0; i < items.length; i++) {
// Resolve expression for each item
const up = this.getNodeParameter('userPrompt', i) as string;
try {
const resp = await axios.post(
'https://api.openai.com/v1/chat/completions',
{
model,
temperature,
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: up },
],
},
{
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json',
},
timeout: 30000,
},
);
const content = resp.data?.choices?.[0]?.message?.content ?? '';
results.push({ json: { ai_output: content, raw: resp.data } });
} catch (err: any) {
results.push({ json: { error: err.message ?? String(err) } });
}
}
return [results];
}
}
Credential file (example): OpenAiApi.credentials.ts
import { ICredentialType, INodeProperties } from 'n8n-workflow';
export class OpenAiApi implements ICredentialType {
name = 'openAiApi';
displayName = 'OpenAI API';
properties: INodeProperties[] = [
{
displayName: 'API Key',
name: 'apiKey',
type: 'string',
default: '',
},
];
}
This is a simplified illustration. In a real project:
- Use node-fetch or n8n’s built‑in HTTP helpers per n8n guidelines.
- Add retries/backoff, error normalization, and rate-limit handling.
- Support Response Format = JSON with schema validation.
- Consider Azure OpenAI variants (base URL, api-version).
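The retries/backoff point can be sketched as a small generic helper (a common pattern, not n8n’s built-in retry; the error shape assumed here is axios-style, with the HTTP status at `err.response.status`):

```javascript
// Retry an async call with exponential backoff, honoring HTTP 429/5xx.
async function withRetry(fn, { retries = 3, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = err?.response?.status;
      const retryable = status === 429 || (status >= 500 && status < 600);
      if (!retryable || attempt >= retries) throw err;
      // 500ms, 1s, 2s, ... plus jitter to avoid thundering herds
      const delay = baseMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```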
Designing better prompts & outputs (for Ops/Network use-cases)
- System message: Set clear role/constraints—e.g., “You are a network ops assistant. Prefer bullet points and short sections; return JSON keys: summary, root_causes, confidence.”
- User content: Feed structured context: timestamps, device names, severity, metrics (latency/packet loss), prior incidents.
- Output schema: Ask for JSON and then parse/validate via Function node:
// Example: Extract JSON block from AI output
const text = $json.ai_output || '';
const match = text.match(/\{[\s\S]*\}/);
return [{ json: match ? JSON.parse(match[0]) : { raw: text } }];
- Guardrails: Use temperature 0–0.3 and add “If uncertain, say ‘insufficient data’” to reduce hallucinations.
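The guardrail bullet can be sketched as a small request builder that bakes in a low temperature and the uncertainty clause (a sketch; adapt the wording and temperature to your own policy):

```javascript
// Compose a guarded system prompt with conservative request parameters.
function guardedRequest(systemPrompt, userPrompt) {
  return {
    temperature: 0.2, // low temperature for deterministic ops output
    messages: [
      {
        role: 'system',
        content:
          systemPrompt +
          " If uncertain, say 'insufficient data' rather than guessing.",
      },
      { role: 'user', content: userPrompt },
    ],
  };
}
```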