- No-code using n8n’s built‑in AI integrations (e.g., OpenAI Chat).
- Low-code using the generic HTTP Request node to call any AI API (OpenAI, Azure OpenAI, Anthropic, etc.).
- (Optional) Pro-code: build a custom Community Node in TypeScript that encapsulates your AI logic.
I’ll also include a ready-to-import workflow JSON and a TypeScript skeleton for a custom node, so you can go from idea to production quickly.
Why an “AI node” in n8n?
n8n is a workflow automation tool that lets you chain nodes (steps) together—triggers, transformations, API calls, and actions. An “AI node” typically:
- Accepts input (prompt + context).
- Calls an LLM API (e.g., OpenAI).
- Returns structured output (text, JSON) for downstream nodes (routing, storage, notifications).
This becomes powerful when you automate: daily report summaries, ticket classification, code reviews, knowledge base Q&A, or data extraction from logs—workflows your team can reuse.
Prerequisites
- An n8n instance (self-hosted Docker or n8n Cloud).
- An AI provider API key (e.g., OpenAI).
- Basic familiarity with n8n nodes and credentials.
Security tip: Store your API keys in n8n Credentials (never hard-code keys in nodes or workflows). For self-hosted instances, use environment variables such as `N8N_ENCRYPTION_KEY` and `OPENAI_API_KEY`.
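For example, a self-hosted Docker setup can pass those keys as environment variables at startup (the placeholder values below are illustrative; replace them with your own and keep them out of source control):

```shell
# Illustrative docker run for self-hosted n8n.
# N8N_ENCRYPTION_KEY protects stored credentials; OPENAI_API_KEY can then
# be referenced from workflows via $env instead of being pasted into nodes.
docker run -d --name n8n \
  -p 5678:5678 \
  -e N8N_ENCRYPTION_KEY="replace-with-a-long-random-string" \
  -e OPENAI_API_KEY="sk-your-key" \
  docker.n8n.io/n8nio/n8n
```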
Approach A: Use the built-in OpenAI Chat node (No-code)
n8n ships with OpenAI nodes that are straightforward to configure.
Steps:
- Add Trigger: For demo, use Manual Trigger.
- Add Node: Search for OpenAI → Chat.
- Credentials: Create OpenAI credentials with your API key.
- Model: Select `gpt-4o` or `gpt-4.1` (or any supported model in your setup).
- Messages:
- System: “You are a professional assistant that responds concisely.”
- User: Use an expression to pass dynamic input, e.g. `{{$json.prompt}}`.
- Input variable: Add a Set node before OpenAI to define `prompt`, or map it from a webhook, email, etc.
- Output: The node returns `data.choices[0].message.content` (or `response`, depending on node version). Pass it to subsequent nodes (e.g., email, Slack, DB insert).
Pros: Fast, simple, minimal configuration.
Cons: Tied to OpenAI node capabilities; less control than the raw API.
Approach B: Use HTTP Request node for any AI provider (Low-code)
This gives you fine-grained control and works with providers that may not have a dedicated n8n node.
Example with OpenAI Chat Completions API:
Manual Trigger (start node)
Set node → create fields:
- `prompt`: “Summarize these NOC logs and flag potential network issues…”
- `context`: Logs or ticket payload (string or JSON).
HTTP Request node:
- Method: `POST`
- URL: `https://api.openai.com/v1/chat/completions`
- Authentication: None (we’ll use headers)
- Headers:
  - `Authorization: Bearer {{$credentials.openAi.apiKey}}`
  - `Content-Type: application/json`
- Credentials: define a Custom API Key credential (or use OpenAI credentials if available)
- Body (JSON):

```json
{
  "model": "gpt-4o",
  "temperature": 0.2,
  "messages": [
    { "role": "user", "content": "Prompt: {{$json.prompt}}\nContext: {{$json.context}}" }
  ]
}
```
- Response: JSON. The content path is typically `choices[0].message.content`.
(Optional) IF node: Check for errors or empty responses.
(Optional) Code / Function node: Parse structured sections, extract JSON with regex, convert to n8n fields.
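The two optional steps above can be folded into one small helper. This sketch assumes the HTTP Request node returned the raw Chat Completions body; `extractReply` and its output field names are illustrative, not part of n8n:

```typescript
// Hypothetical helper for an n8n Code node: normalize the HTTP Request
// output so downstream nodes get a clean text field plus an emptiness flag.
interface ChatResponse {
  choices?: { message?: { content?: string } }[];
}

function extractReply(response: ChatResponse): { ai_output: string; is_empty: boolean } {
  const content = response.choices?.[0]?.message?.content ?? '';
  return { ai_output: content, is_empty: content.trim() === '' };
}

// Inside a Code node you would apply it per item, e.g.:
// return $input.all().map((item) => ({ json: extractReply(item.json) }));
```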
Pros: Works with any AI API (OpenAI, Azure OpenAI, Anthropic, local models behind an HTTP endpoint).
Cons: You must handle headers, body schema, rate limits, and errors yourself.
Example: Ready-to-import workflow (JSON)
What this does:
- Takes a prompt and context (could be logs or ticket text).
- Calls the OpenAI Chat API via HTTP Request.
- Extracts the assistant’s message and returns it.
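A minimal sketch of what such a workflow file can look like, following the steps above (node names, positions, and parameter spellings are illustrative; the exact schema varies by n8n version, so treat this as a starting point rather than a guaranteed import):

```json
{
  "name": "AI Summary Demo",
  "nodes": [
    {
      "name": "Manual Trigger",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [0, 0],
      "parameters": {}
    },
    {
      "name": "Set Prompt",
      "type": "n8n-nodes-base.set",
      "typeVersion": 1,
      "position": [200, 0],
      "parameters": {
        "values": {
          "string": [
            { "name": "prompt", "value": "Summarize these NOC logs and flag potential network issues" },
            { "name": "context", "value": "" }
          ]
        }
      }
    },
    {
      "name": "OpenAI Request",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 1,
      "position": [400, 0],
      "parameters": {
        "method": "POST",
        "url": "https://api.openai.com/v1/chat/completions",
        "jsonParameters": true,
        "headerParametersJson": "{ \"Authorization\": \"Bearer {{ $env.OPENAI_API_KEY }}\", \"Content-Type\": \"application/json\" }",
        "bodyParametersJson": "{ \"model\": \"gpt-4o\", \"temperature\": 0.2, \"messages\": [{ \"role\": \"user\", \"content\": \"Prompt: {{$json.prompt}}\\nContext: {{$json.context}}\" }] }"
      }
    }
  ],
  "connections": {
    "Manual Trigger": { "main": [[{ "node": "Set Prompt", "type": "main", "index": 0 }]] },
    "Set Prompt": { "main": [[{ "node": "OpenAI Request", "type": "main", "index": 0 }]] }
  }
}
```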
Copy and import into n8n (top-right menu → Import from file). Replace the credential placeholder with your credential name, or set the header inline with your key for testing.
Notes:
- For production, swap `{{ $env.OPENAI_API_KEY }}` with a proper n8n Credential reference (e.g., an OpenAI credential).
- If you’re using Azure OpenAI, the URL and headers differ (endpoint per resource, `api-key` header, `api-version` query).
Approach C (Optional): Build a Custom Community Node (Pro‑code)
If you want a reusable, shareable “AI node” with curated parameters (model, temperature, prompt templates), create a Community Node in TypeScript.
High-level steps:
- Scaffold a community node repo (n8n expects a specific structure).
- Define credentials (API key).
- Implement a node description (parameters, UI).
- Implement the execute method to call the provider API and return normalized output.
- Build & publish; install in n8n via Community Nodes.
Minimal TypeScript skeleton (AIChat.node.ts):
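The sketch below is deliberately simplified and self-contained: a real community node implements `INodeType` from the `n8n-workflow` package and reads parameters via `this.getNodeParameter()` and `this.getCredentials()` inside `execute()`; the local interface and the `run` helper here are stand-ins so the example compiles on its own.

```typescript
// Simplified sketch of AIChat.node.ts. Stand-in type instead of the
// real n8n-workflow imports, so this compiles without the package.
interface INodeExecutionData {
  json: Record<string, unknown>;
}

export class AIChat {
  description = {
    displayName: 'AI Chat',
    name: 'aiChat',
    group: ['transform'],
    version: 1,
    description: 'Send a prompt to a chat-completion API and return the reply',
    defaults: { name: 'AI Chat' },
    inputs: ['main'],
    outputs: ['main'],
    credentials: [{ name: 'openAiApi', required: true }],
    properties: [
      { displayName: 'Model', name: 'model', type: 'string', default: 'gpt-4o' },
      { displayName: 'Temperature', name: 'temperature', type: 'number', default: 0.2 },
      { displayName: 'Prompt', name: 'prompt', type: 'string', default: '' },
    ],
  };

  // In a real node this logic lives in execute(this: IExecuteFunctions);
  // here it is a plain method so the call flow is easy to follow.
  async run(
    apiKey: string,
    model: string,
    temperature: number,
    prompt: string,
  ): Promise<INodeExecutionData[]> {
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model,
        temperature,
        messages: [{ role: 'user', content: prompt }],
      }),
    });
    if (!res.ok) {
      throw new Error(`AI API error: HTTP ${res.status}`);
    }
    const data = (await res.json()) as {
      choices?: { message?: { content?: string } }[];
    };
    // Normalize to n8n's item shape for downstream nodes.
    return [{ json: { output: data.choices?.[0]?.message?.content ?? '' } }];
  }
}
```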
Credential file (example): OpenAiApi.credentials.ts
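A matching credential sketch, again simplified: the real file implements `ICredentialType` from `n8n-workflow`, and n8n links it to the node via the shared `openAiApi` name.

```typescript
// Simplified sketch of OpenAiApi.credentials.ts. The `name` field must
// match the node's `credentials` declaration; `password: true` makes the
// key render as a masked input in the n8n UI.
export class OpenAiApi {
  name = 'openAiApi';
  displayName = 'OpenAI API';
  properties = [
    {
      displayName: 'API Key',
      name: 'apiKey',
      type: 'string',
      typeOptions: { password: true },
      default: '',
    },
  ];
}
```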
This is a simplified illustration. In a real project:
- Use `node-fetch` or built‑in HTTP helpers per n8n guidelines.
- Add retries/backoff, error normalization, rate-limit handling.
- Support Response Format = JSON with schema validation.
- Consider Azure OpenAI variants (base URL, api-version).
Designing better prompts & outputs (for Ops/Network use-cases)
- System message: Set clear role/constraints—e.g., “You are a network ops assistant. Prefer bullet points and short sections; return JSON keys: `summary`, `root_causes`, `confidence`.”
- User/content: Feed structured context: timestamps, device names, severity, metrics (latency/packet loss), prior incidents.
- Output schema: Ask for JSON and then parse/validate via a Function node:

```js
// Example: Extract JSON block from AI output
const text = $json.ai_output || '';
const match = text.match(/\{[\s\S]*\}/);
return [{ json: match ? JSON.parse(match[0]) : { raw: text } }];
```
- Guardrails: Use temperature `0–0.3`, and add “If uncertain, say ‘insufficient data’” to reduce hallucinations.