🔧 Tools & Platforms

n8n AI Automation:
Self-Hosted Workflows for Beginners

n8n gives you unlimited AI automation workflows at the cost of a $5 server. No per-task fees, no operation limits, no vendor lock-in, and all your data stays on your own infrastructure. This guide takes you from zero to a running n8n instance with your first AI workflow in an afternoon.

🔧 n8n · Self-Hosted · By ThinkForAI Editorial Team · Updated November 2024 · ~22 min read
The economics of self-hosting n8n are hard to argue with: Make.com's Core plan ($9/month, 10,000 operations) handles roughly 2,000–2,500 items per month through a 4-step AI workflow. n8n self-hosted on a Hetzner CX11 server (€3.79/month, roughly $4) handles unlimited items — no operations ceiling. For anyone running automations at meaningful volume, the 90-minute setup investment pays back within the first weeks of operation. For anyone who cares about data privacy, the fact that no data leaves your infrastructure is worth the setup effort alone.
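To make that estimate concrete, here is the back-of-envelope arithmetic as a runnable sketch. It assumes each item consumes one operation per workflow step, which is why real-world throughput lands a little below this ceiling:

```javascript
// Back-of-envelope: items per month a 10,000-operation plan supports
// when a 4-step workflow consumes one operation per step per item.
const opsPerMonth = 10000;   // Make.com Core allowance
const stepsPerWorkflow = 4;  // e.g. trigger, AI call, router, logger
const itemsPerMonth = Math.floor(opsPerMonth / stepsPerWorkflow);
console.log(itemsPerMonth);  // 2500, the top of the 2,000-2,500 estimate
```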

What is n8n and why does self-hosting matter?

n8n (pronounced "n-eight-n", standing for "nodemation") is an open-source workflow automation platform. Like Make.com and Zapier, it lets you build automated workflows that connect apps and services. Unlike those platforms, n8n's source code is publicly available and you can run it on your own server — which means no per-operation fees, no monthly task limits, and full control over your data.

The n8n interface: similar to Make.com, more powerful

n8n uses a visual flow diagram interface similar to Make.com. Nodes (equivalent to Make.com modules) are connected by edges showing data flow. You build workflows by adding nodes, configuring their parameters, and connecting them. The primary difference is that n8n also supports JavaScript code nodes — you can insert a code block at any point in the workflow and write custom logic in JavaScript, which is executed server-side when the workflow runs. This makes n8n significantly more powerful than Make.com for complex logic without requiring you to build and deploy a separate application.
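For a flavour of what that looks like in practice, here is a minimal Code node sketch. The field names are invented for illustration, and the input items are inlined so the snippet runs standalone; inside n8n you would read them with $input.all() and finish with a return statement:

```javascript
// Sketch of an n8n Code node: normalise email fields on incoming items.
// In n8n, items come from $input.all(); here they are inlined so the
// logic runs on its own. Field names are hypothetical.
const items = [
  { json: { Email: '  Alice@Example.COM ', signup_date: '2024-11-02' } },
  { json: { Email: 'bob@example.com', signup_date: '2024-10-15' } },
];

const normalised = items.map(item => {
  const email = item.json.Email.trim().toLowerCase();
  return {
    json: {
      ...item.json,
      Email: email,
      domain: email.split('@')[1], // derived field for later routing
    },
  };
});

// An actual n8n Code node would end with: return normalised;
console.log(normalised[0].json);
```

This three-line transformation would take several filter and mapping modules to replicate on a purely visual platform.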

n8n's AI capabilities

n8n has dedicated AI nodes for the major providers: an OpenAI node (supporting Chat Completions, Assistants, and other OpenAI APIs), an Anthropic node (for Claude models), a Google Gemini node, and a Mistral node. It also has an "Agent" node — a higher-level abstraction that lets you build AI agents that can use tools and take multi-step actions. For most AI automation use cases, the OpenAI Chat Model node paired with a standard node chain is the right starting point. The Agent node becomes valuable when you want the AI to decide which tools to use and in what sequence.

n8n vs. Make.com vs. Zapier: key differences

| Feature | n8n self-hosted | Make.com free | Zapier Starter |
| --- | --- | --- | --- |
| Monthly cost | ~$5 (VPS only) | Free | $29.99 |
| Operations/tasks limit | Unlimited | 1,000 ops | 750 tasks |
| Trigger interval | Instant (webhook) | 15 min (free) | Near-instant |
| Setup difficulty | Moderate (90 min) | None | None |
| Code nodes | Yes (JavaScript) | No | No (except Pro) |
| Data stays on your server | Yes | No | No |
| AI model nodes | OpenAI, Claude, Gemini, Mistral | OpenAI, Claude, Gemini | OpenAI, Claude, Gemini |
| Integration count | 400+ native nodes | ~1,500 | 6,000+ |

Who should self-host n8n

n8n self-hosting is the right choice if: (1) you are comfortable with basic server management — SSH, running Docker commands, managing a VPS; (2) you run or plan to run automations at significant volume where Make.com's operation limits become a real constraint; (3) data privacy is important — your automations process sensitive client data, financial records, health information, or anything else you would rather not route through a third-party SaaS platform; or (4) you want the ability to write custom JavaScript logic within your workflows without paying for a professional plan on hosted platforms.

If the server setup sounds intimidating, start with Make.com and return to n8n when you are ready. The skills you build on Make.com translate directly to n8n — the concepts are the same; only the setup and the code-node capability differ.

Self-hosting n8n: complete VPS setup in 90 minutes

Here is the exact process I use to get n8n running on a fresh VPS. These instructions assume Ubuntu 22.04 LTS, which is the most widely used Linux distribution for this purpose and the one with the best-documented troubleshooting resources.

Before you start

You need: a domain name (e.g., n8n.yourdomain.com — free subdomain from your existing domain works perfectly), a Cloudflare account (free), a Hetzner or DigitalOcean account for the VPS, and a terminal application (Terminal on Mac, PowerShell on Windows, any terminal on Linux). Budget approximately 90 minutes for first-timers; 30 minutes for people comfortable with servers.

Step 1: Create the VPS

Go to hetzner.com/cloud and create an account. In the Cloud Console, click "New Server." Select: Location = Helsinki or Frankfurt (lowest latency for most users), OS = Ubuntu 22.04, Type = CX11 (1 vCPU, 2GB RAM — sufficient for n8n at typical small business volumes). Add your SSH public key for authentication (or use a root password if you prefer). Click "Create and Buy Now." Cost: approximately €3.79/month.

Step 2: Point your subdomain to the server

Copy the server's IP address from the Hetzner console. In Cloudflare, go to your domain's DNS settings. Add an A record: Name = n8n (creating n8n.yourdomain.com), Target = your server IP, Proxy = orange cloud (enabled). This routes traffic through Cloudflare's CDN and enables automatic HTTPS — you do not need to configure SSL certificates manually.

Step 3: SSH into the server and install Docker

Open your terminal. Run: ssh root@YOUR_SERVER_IP. Once connected, install Docker with: curl -fsSL https://get.docker.com | sh. This takes 2–3 minutes. Verify installation: docker --version should show a version number.

Step 4: Run n8n with Docker

Run the following command, replacing YOUR_DOMAIN with your actual subdomain (e.g., n8n.yourdomain.com):

docker run -d \
  --name n8n \
  --restart always \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  -e N8N_HOST=YOUR_DOMAIN \
  -e N8N_PORT=5678 \
  -e N8N_PROTOCOL=https \
  -e WEBHOOK_URL=https://YOUR_DOMAIN \
  -e N8N_BASIC_AUTH_ACTIVE=true \
  -e N8N_BASIC_AUTH_USER=admin \
  -e N8N_BASIC_AUTH_PASSWORD=change_this_password \
  n8nio/n8n

Step 5: Configure Cloudflare to proxy to your server port

In Cloudflare for your domain, go to Network → Enable "WebSockets." Then go to Rules → Create a new Page Rule for your n8n subdomain (n8n.yourdomain.com/*) and set it to "Disable Performance." This ensures Cloudflare correctly proxies all n8n traffic including real-time connections.

Step 6: Access n8n and create your account

Wait 2–3 minutes for DNS propagation, then navigate to https://n8n.yourdomain.com in your browser. You will be prompted for the basic auth credentials you set in step 4. After authenticating, n8n's setup wizard walks you through creating your admin account. Once complete, you are in n8n's workflow editor — a clean visual canvas ready for building.

Step 7: Test with a simple workflow

Click "New Workflow." Click "Add first step." Search "Schedule Trigger" and add it. Set it to run every minute. Connect a "Set" node (just outputs a fixed value). Click "Execute Workflow" to test. If you see a successful execution in the workflow history, everything is working correctly. Delete this test workflow.

Total cost of a self-hosted n8n instance

VPS (Hetzner CX11): €3.79/month (~$4.10). Domain: $0 if using a subdomain of your existing domain. Cloudflare: $0 (free plan sufficient). n8n software: $0 (open source). OpenAI API: pay-per-use, typically $5–$20/month for small business automation volumes. Total for a fully functional, unlimited AI automation infrastructure: approximately $9–$25/month depending on API usage. Compare to Make.com Core ($9/month, capped at 10,000 operations) plus the same API costs for the same outcome.

Building your first n8n AI workflow

Now that n8n is running, here is how to build an email classification workflow — the same use case covered in the Make.com and Zapier guides, allowing direct comparison of the three approaches.

Connecting Gmail to n8n

In n8n, click "New Workflow." Click "Add first step." Search "Gmail" and select "Gmail Trigger." Click "Create New Credential." Select "OAuth2." n8n provides a callback URL — copy it. In the Google Cloud Console, create an OAuth 2.0 Client ID, add the n8n callback URL as an authorized redirect URI, and download the client credentials JSON. Back in n8n, paste the Client ID and Client Secret, then click "Connect" to complete the OAuth flow. Select the trigger event "Message Received" and configure it to watch your inbox.

The Gmail OAuth setup takes 15–20 minutes the first time and is the most complex part of the n8n setup. Once done, it is stored as a reusable credential that all your Gmail-connected workflows can use.

Adding the OpenAI node

After the Gmail trigger, click the "+" button to add a new node. Search "OpenAI" and select the OpenAI node. Click "Create New Credential," add your OpenAI API key, and save. In the node configuration: Resource = "Chat," Operation = "Message," Model = "gpt-4o-mini." In the Messages section, add a system message with your classification prompt and a user message mapping the Gmail trigger's email body and subject. Save the node.

Adding conditional routing with a Switch node

After the OpenAI node, add a "Switch" node (one of n8n's built-in flow-control nodes). The Switch node evaluates the OpenAI response and routes execution to different branches based on its value. Configure conditions: if the OpenAI output contains "BILLING" → Branch 1; if it contains "SUPPORT" → Branch 2; if it contains "SALES" → Branch 3; default → Branch 4. Connect a different Gmail "Add Label" node or Slack notification node to each branch.

Adding the logging node

On each branch (or after the Switch via a Merge node), add a Google Sheets "Append or Update Row" node to log the classification. Map: timestamp = {{$now}}, email subject = Gmail trigger subject, category = OpenAI response output, email from = Gmail trigger from email.

Activating the workflow: Toggle the workflow to "Active" using the toggle in the top right. n8n will now trigger automatically when new emails arrive in Gmail. Unlike Make.com's 15-minute polling, Gmail triggers in n8n use the Gmail push notification API — emails are processed within seconds of arriving, not minutes.

n8n's AI nodes in depth: chains, agents, and memory

n8n has a more sophisticated set of AI-specific nodes than Make.com or Zapier, reflecting its focus on power users. Understanding these nodes helps you build more capable AI workflows.

The Basic LLM Chain node

The simplest AI workflow node — a prompt in, a response out. Connect it to any LLM node (OpenAI Chat Model, Anthropic Claude Model, etc.) and configure the system prompt. For straightforward classification and generation tasks, this is all you need and it is equivalent to what you do in Make.com with the OpenAI "Create a Completion" module.

The AI Agent node

A significantly more capable node that allows the AI to decide which tools to use and in what sequence to complete a task. You configure the agent with: a system prompt defining its role and goal, a set of tools it can use (other n8n nodes, HTTP requests, functions you define), and an LLM for reasoning. When the agent runs, it reasons about the task, calls tools as needed, evaluates the results, and continues until it has accomplished the goal or determined it cannot. This is the foundation for building genuinely agentic AI workflows — systems that plan and execute rather than just executing a fixed sequence.

Memory nodes

n8n provides several memory nodes that allow AI workflows to maintain context across runs: a Window Buffer Memory node (remembers the last N messages in a conversation), a Postgres Chat Memory node (stores conversation history in a PostgreSQL database for persistent long-term memory), and a Zep Memory node (integrates with the Zep memory management platform). These are valuable for building AI workflows that feel conversational — where context from previous interactions informs current responses.

The Vector Store nodes

n8n has native integration with several vector databases — Pinecone, Supabase pgvector, Qdrant, Weaviate — through dedicated nodes. This makes it possible to build proper RAG (Retrieval-Augmented Generation) pipelines entirely within n8n without writing code: a Vector Store Retriever node performs semantic search against your knowledge base, the retrieved content is passed to the AI node as context, and the AI generates a grounded response. This is a significant capability advantage over Make.com and Zapier, which require external tools or custom code to implement RAG.

5 practical n8n AI workflow examples

Workflow 1: Instant email classification and response (using Gmail push)

Unlike Make.com's 15-minute polling, n8n's Gmail trigger uses push notifications — new emails trigger the workflow within seconds. Node chain: Gmail Trigger → OpenAI (classify) → Switch (route by category) → [Gmail Label + Gmail Create Draft (for response categories)] + [Google Sheets Log]. The same workflow that can wait up to 15 minutes on Make.com's free tier runs within seconds in n8n.

Classification system prompt (same as Make.com guide but tuned for n8n's expression syntax)
You are an email triage specialist. Classify this email into exactly one category.

CATEGORIES: BILLING | SUPPORT | SALES | SCHEDULING | PERSONAL | SPAM | OTHER

Return a JSON object: {"category": "CATEGORY", "urgency": 1-5, "summary": "max 10 words"}
Return ONLY the JSON. No other text.
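Models occasionally ignore the "Return ONLY the JSON" instruction and wrap the object in markdown fences, so it is worth passing the response through a small Code node before the Switch. This is a hedged sketch: in n8n the raw string would be mapped from the OpenAI node's output rather than passed in as a function argument:

```javascript
// Sketch: defensively parse the classifier's JSON before routing.
// The categories mirror the system prompt above; the fallback sends
// anything unparseable to the OTHER branch for human review.
function parseClassification(raw) {
  const fallback = { category: 'OTHER', urgency: 3, summary: 'unparseable response' };
  const valid = ['BILLING', 'SUPPORT', 'SALES', 'SCHEDULING', 'PERSONAL', 'SPAM', 'OTHER'];
  try {
    // Strip markdown fences the model sometimes adds despite instructions.
    const parsed = JSON.parse(raw.replace(/```(?:json)?/g, '').trim());
    return valid.includes(parsed.category) ? parsed : fallback;
  } catch {
    return fallback;
  }
}

const fenced = '```json\n{"category":"BILLING","urgency":4,"summary":"overdue invoice"}\n```';
console.log(parseClassification(fenced).category);             // BILLING
console.log(parseClassification('Sorry, I cannot.').category); // OTHER
```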

Workflow 2: AI agent for research and briefing

An AI Agent workflow that receives a company name via webhook, searches for recent news using the SerpAPI or Brave Search tool, retrieves the company's LinkedIn page via HTTP Request, synthesises the information into a structured briefing document, and saves it to Notion or Google Drive. This multi-step agent workflow — where the AI decides what information to gather and how to synthesise it — is difficult to build in Make.com without code but straightforward with n8n's Agent node.

Workflow 3: Customer FAQ responder with RAG

Node chain: Webhook trigger (receives customer question) → Embeddings (convert question to vector) → Pinecone Vector Store Retriever (find relevant FAQ chunks) → AI Node with retrieved context → HTTP Response (return the grounded answer). This is a proper RAG pipeline in n8n, keeping all data on your infrastructure and achieving significantly lower hallucination rates than prompting a model without knowledge base retrieval.
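The step where the retrieved content is "passed to the AI node as context" amounts to assembling a grounded prompt. Here is a sketch of that assembly; the chunk shape (text plus source) is an assumption, since the exact fields depend on which vector store node you use:

```javascript
// Sketch: build a grounded chat prompt from retrieved FAQ chunks.
// Chunk fields (text, source) are assumed; adapt to your retriever's output.
function buildGroundedPrompt(question, chunks) {
  const context = chunks
    .map((c, i) => `[${i + 1}] ${c.text} (source: ${c.source})`)
    .join('\n');
  return [
    {
      role: 'system',
      content:
        'Answer using ONLY the context below. If the answer is not in the ' +
        'context, say you do not know. Cite sources as [n].\n\nCONTEXT:\n' + context,
    },
    { role: 'user', content: question },
  ];
}

const msgs = buildGroundedPrompt('What is your refund window?', [
  { text: 'Refunds are accepted within 30 days of purchase.', source: 'faq#refunds' },
]);
console.log(msgs[0].content.includes('[1] Refunds')); // true
```

Constraining the model to the retrieved context is what produces the lower hallucination rates mentioned above.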

Workflow 4: Automated invoice processing with validation

Node chain: Gmail Trigger (watches for email with PDF attachments) → Extract Binary Data → HTTP Request to OpenAI Vision API (extract structured data from invoice image) → Code Node (validate extracted JSON against schema, flag low-confidence items) → Google Sheets (log valid items) → Gmail (send exception report for flagged items). The Code Node here handles the validation logic in clean JavaScript rather than forcing it through visual filter modules.

// n8n Code Node: Validate invoice JSON
// Models sometimes wrap JSON in markdown fences; strip them before parsing,
// and fall back gracefully if the response is not valid JSON at all.
const raw = $input.first().json.choices[0].message.content;
let invoice;
try {
  invoice = JSON.parse(raw.replace(/```(?:json)?/g, '').trim());
} catch (e) {
  return [{ json: { is_valid: false, needs_review: true, validation_errors: ['unparseable_response'] } }];
}

const required = ['supplier_name', 'invoice_number', 'invoice_date', 'total_amount'];
const missing = required.filter(f => !invoice[f]);

return [{
  json: {
    ...invoice,
    is_valid: missing.length === 0 && invoice.confidence !== 'low',
    validation_errors: missing,
    needs_review: missing.length > 0 || invoice.confidence === 'low'
  }
}];

Workflow 5: Daily AI news digest with personalisation

Scheduled trigger (daily 7am) → HTTP Request to multiple RSS feeds (TechCrunch, The Verge, Wired, industry-specific sources) → Code Node (parse RSS XML, extract items from last 24 hours) → OpenAI (filter for relevance to your specific interests, summarise remaining items) → Gmail (send personalised digest). The Code Node for RSS parsing is a 10-line JavaScript function that handles what would take 5–6 Make.com modules to achieve.
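If you let n8n's RSS Read node do the XML parsing, the Code node only needs the "last 24 hours" filter. A sketch with an assumed item shape (title, pubDate); in n8n you would map these fields from the parsed feed items:

```javascript
// Sketch: keep only feed items published within the last 24 hours.
// Item shape (title, pubDate) is assumed from a parsed RSS feed.
function lastDay(items, now = Date.now()) {
  const cutoff = now - 24 * 60 * 60 * 1000;
  return items.filter(it => new Date(it.pubDate).getTime() >= cutoff);
}

const now = Date.parse('2024-11-20T07:00:00Z'); // pretend "today, 7am"
const items = [
  { title: 'Fresh story', pubDate: '2024-11-20T01:00:00Z' },
  { title: 'Old story', pubDate: '2024-11-17T09:00:00Z' },
];
console.log(lastDay(items, now).map(i => i.title)); // [ 'Fresh story' ]
```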

Maintaining your n8n instance: the monthly checklist

Self-hosting requires some ongoing maintenance — not much, but enough to keep your instance secure and running reliably. Here is the monthly checklist that keeps a well-run n8n instance healthy with about 20 minutes of attention per month.

Monthly maintenance tasks

  • Update n8n to the latest version. Run: docker pull n8nio/n8n && docker stop n8n && docker rm n8n, then re-run your original docker run command. n8n typically releases updates bi-weekly. Check n8n's GitHub releases page for breaking change notices before updating major versions.
  • Check disk usage. Run: df -h on the server. n8n stores execution history, which can grow. If disk usage is above 70%, either increase your VPS storage tier or configure n8n to prune old execution data (set EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE=168 to keep 7 days of history).
  • Review error executions. In n8n's execution history, filter for "Error" status and review any recurring failures. A workflow consistently failing on the same input type needs prompt attention.
  • Verify backup. Your n8n data is stored in ~/.n8n on the server. Confirm your backup system is running — either Hetzner's managed backups (€0.74/month for a CX11) or a manual rsync to a backup location.
  • Check API costs. Review your OpenAI and other API spending in their respective dashboards. Unexpected cost increases indicate either increased automation volume (good) or a runaway loop or prompt length inflation (needs investigation).

The backup you must not skip

The ~/.n8n directory contains all your workflow definitions, credentials, and execution history. If your server fails without a backup, you lose everything and must rebuild from scratch. Hetzner's server backups cost €0.74/month for a CX11 and create daily snapshots automatically — enable them immediately after setting up your server. This is non-negotiable. A backup that costs less than a euro per month has prevented multiple instances of what would have been catastrophic data loss.

Frequently asked questions about n8n

Do I need to know Linux to self-host n8n?

You need to be comfortable with basic terminal commands: SSH to connect to a server, running Docker commands to install and manage n8n, and navigating the filesystem to check logs or disk usage. You do not need deep Linux expertise — the commands involved are well-documented and largely copy-paste. Most people who have never used a server before get n8n running on their first attempt by following the exact steps above. The main prerequisite is being comfortable reading error messages and searching for solutions when something does not work exactly as expected.

What happens if my VPS server goes down?

Hetzner and DigitalOcean offer 99.9%+ uptime SLAs — server outages are rare. When they do occur: n8n is configured with --restart always in the Docker command, which means it automatically restarts whenever the server restarts. For trigger-based workflows, events that occurred during downtime may be missed (emails that arrived while n8n was down will not be processed retroactively by most trigger types). For scheduled workflows, the schedule simply resumes when n8n comes back online. For critical business automation requiring high availability, consider a managed database for n8n's data storage and a load-balanced setup — though this is overkill for most use cases.

Can I use Ollama with n8n to run AI models locally?

Yes — this is one of n8n's most powerful combinations. Install Ollama on the same server as n8n (or a separate server with a GPU for better performance), pull a model like Llama 3.1 8B or Mistral 7B, and in n8n use the "Ollama Chat Model" node or a generic HTTP Request node to call Ollama's API at http://localhost:11434/api/chat. This gives you completely free AI inference with no per-token costs and no data leaving your infrastructure. Model quality is lower than GPT-4o for complex tasks but adequate for many classification and extraction use cases, especially at scale where API costs become significant.
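If you take the generic HTTP Request route, the POST body follows the shape of Ollama's /api/chat endpoint (model, messages, stream). A sketch of building that payload; the model tag and prompts below are examples, not requirements:

```javascript
// Sketch: build a request body for Ollama's /api/chat endpoint.
// POST this as JSON to http://localhost:11434/api/chat.
function ollamaChatBody(model, systemPrompt, userMessage) {
  return {
    model,          // e.g. 'llama3.1:8b'; must be pulled with `ollama pull` first
    stream: false,  // request a single JSON response instead of a token stream
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userMessage },
    ],
  };
}

const body = ollamaChatBody(
  'llama3.1:8b',
  'Classify emails into categories.',
  'Subject: Invoice overdue\nBody: Your payment is 30 days late.'
);
console.log(JSON.stringify(body).length > 0); // true
```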

Is n8n suitable for a team, or just for individuals?

n8n self-hosted supports multiple user accounts and role-based access control on the Enterprise plan. The Community edition (what you install following this guide) supports a single user account by default but can be configured for multiple users. For teams of 2–5 people sharing automations, many teams use a shared admin account. For teams needing proper multi-user access, credential isolation per team member, and audit logging, n8n's Enterprise plan is the appropriate option — though it requires contacting n8n for pricing. For individual practitioners and small businesses, the Community edition is fully adequate.


Start building unlimited AI automation

The complete AI automation guide covers every tool and architecture — from no-code starting points through self-hosted setups like n8n to custom Python pipelines.

Read the Complete AI Automation Guide →

ThinkForAI Editorial Team

All n8n setup instructions and configurations verified on Ubuntu 22.04 with n8n v1.x. VPS pricing as of November 2024 — check provider sites for current rates.
