
n8n (Self-Hosted)

Self-host n8n with Docker. Full control over your workflows, credentials, and execution history. Free, no feature restrictions.

n8n is a visual workflow automation platform. The self-hosted community edition is completely free with zero feature restrictions -- same nodes, same AI capabilities, same everything as the paid plans. You're just running it yourself.

Don't want to manage infrastructure? Check the n8n Cloud guide instead. Sign up and you're running in under 5 minutes, no Docker needed.

Why Self-Host

  • Free. No per-workflow limits, no execution caps, no feature gates.
  • Your data stays on your machine. Credentials, execution logs, workflow history -- all local.
  • Full environment control. Pin versions, customize the database, integrate with other services on the same network.
  • No vendor dependency. If n8n the company disappeared tomorrow, your instance keeps running.

The tradeoff: you handle updates, backups, and uptime yourself.

Prerequisites

  • Docker and Docker Compose installed
  • A Linux server or local machine (1 GB RAM minimum, 2 GB recommended)
  • A domain name if you want HTTPS access (optional but recommended for webhooks)
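
Quick check that the tooling is in place:

docker --version
docker compose version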

Step 1: Basic Docker Setup

The quickest way to get n8n running:

docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n

Open http://localhost:5678, create your owner account, and you're in.

This works, but it uses SQLite (the default), and all your data lives in that one named volume -- recreate the container without mounting it and everything is gone. For anything beyond testing, use Docker Compose with PostgreSQL.
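
If you ran the quick test above and are moving on to Compose, stop and remove the test container first (the Compose file below reuses the container name n8n); the named volume is left in place:

docker stop n8n && docker rm n8n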

Step 2: Production Setup with Docker Compose

Create a project directory:

mkdir -p ~/n8n && cd ~/n8n

Create a .env file:

# Database
POSTGRES_USER=n8n
POSTGRES_PASSWORD=changeme_use_a_real_password
POSTGRES_DB=n8n

# n8n
N8N_PORT=5678
N8N_ENCRYPTION_KEY=generate-a-random-string-here
WEBHOOK_URL=https://n8n.yourdomain.com/

The N8N_ENCRYPTION_KEY encrypts stored credentials. Generate something random (openssl rand -hex 32) and keep it safe. If you lose it, all saved credentials become unreadable.
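
A minimal way to script that (GNU sed syntax), assuming you're in ~/n8n with the .env file from above:

# Generate a 64-character hex key and write it into .env
key=$(openssl rand -hex 32)
sed -i "s|^N8N_ENCRYPTION_KEY=.*|N8N_ENCRYPTION_KEY=$key|" .env

# Print it once so you can store a copy somewhere safe
echo "$key"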

Create docker-compose.yml:

services:
  postgres:
    image: postgres:17
    container_name: n8n-postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  n8n:
    image: n8nio/n8n:1.87.2
    container_name: n8n
    ports:
      - "${N8N_PORT:-5678}:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: ${POSTGRES_DB}
      DB_POSTGRESDB_USER: ${POSTGRES_USER}
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      WEBHOOK_URL: ${WEBHOOK_URL}
      GENERIC_TIMEZONE: America/Chicago
      N8N_DEFAULT_BINARY_DATA_MODE: filesystem
    volumes:
      - n8n-data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped

volumes:
  postgres-data:
  n8n-data:

Start the stack:

docker compose up -d

Wait 15-20 seconds for PostgreSQL to initialize, then open http://localhost:5678.
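
To confirm both containers came up cleanly:

# postgres should report healthy, n8n should be up
docker compose ps

# Follow the n8n logs until it announces the editor URL
docker compose logs -f n8n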

Step 3: Reverse Proxy (Optional)

If you want HTTPS and a clean URL (required for external webhooks), put a reverse proxy in front. With Caddy:

# Caddyfile
n8n.yourdomain.com {
    reverse_proxy localhost:5678
}

With Nginx (you provide the TLS certificate yourself, e.g. via Let's Encrypt):

server {
    listen 443 ssl;
    server_name n8n.yourdomain.com;

    # Adjust these paths to wherever your certificate actually lives
    # (Let's Encrypt paths shown as an example)
    ssl_certificate     /etc/letsencrypt/live/n8n.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:5678;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (needed for the editor)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
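
Then validate the config and reload (assuming a systemd-managed nginx):

sudo nginx -t
sudo systemctl reload nginx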

Make sure WEBHOOK_URL in your .env matches the public URL, including the protocol. Webhook triggers won't work if this is wrong.
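
A quick way to double-check the value the container actually sees:

# Should print the full public URL, protocol included
docker exec n8n env | grep WEBHOOK_URL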

Step 4: Verify Everything

  1. Log in and create a test workflow
  2. Add a Schedule Trigger node (set to every minute for testing)
  3. Connect it to a Set node that outputs {"test": "working"}
  4. Activate the workflow
  5. Wait a minute, check executions -- you should see a successful run

If you set up a reverse proxy, also test webhooks:

  1. Add a Webhook trigger node to a new workflow
  2. Activate it
  3. Hit the webhook URL from your browser or curl -- you should get a response
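
For example (the path segment is a placeholder; use whatever your Webhook node shows):

curl -i https://n8n.yourdomain.com/webhook/my-test-hook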

Updating n8n

Pin the version in your docker-compose.yml (like n8nio/n8n:1.87.2 above). When you want to update:

# Change the version tag in docker-compose.yml, then:
docker compose pull n8n
docker compose up -d n8n

Read the release notes before updating. Breaking changes happen, especially around AI nodes which are still evolving fast.

Configuration Notes

  • Version: Use 1.80+ minimum. AI Agent and LLM Chain nodes have changed significantly in recent versions. Some tutorials written for older versions won't match the current UI.
  • Cron trigger gotcha: After activating a scheduled workflow via the API (not the UI toggle), restart the n8n container and toggle the workflow off/on. The cron scheduler sometimes doesn't register until you do this.
  • Code node limitations: Code nodes run in a sandboxed VM. fetch(), DOMParser, and npm packages are not available by default (self-hosted instances can allowlist external modules with NODE_FUNCTION_ALLOW_EXTERNAL). Use HTTP Request nodes for any external calls.
  • Binary data: Set N8N_DEFAULT_BINARY_DATA_MODE=filesystem (not the default) if your workflows handle files. The default mode stores binary data in the database, which bloats fast.
  • Execution pruning: Execution history grows quickly. Set EXECUTIONS_DATA_MAX_AGE=168 (hours) to auto-prune executions older than a week, or they'll slowly eat disk space.
  • Backups: Back up the PostgreSQL database and the n8n-data volume. The database has your workflows and execution history. The volume has credentials (encrypted) and binary files.
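
A minimal backup sketch, assuming the service names and credentials from the Compose file above (the exact volume name depends on your Compose project name; check docker volume ls):

# Dump the workflow database
docker exec n8n-postgres pg_dump -U n8n n8n > n8n-db-$(date +%F).sql

# Archive the n8n data volume (encrypted credentials, binary files)
docker run --rm -v n8n_n8n-data:/data -v "$PWD":/backup alpine \
  tar czf /backup/n8n-data-$(date +%F).tar.gz -C /data .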

Troubleshooting

Webhook URL returns 404 -- WEBHOOK_URL doesn't match your actual public URL, or the workflow isn't activated. Webhook triggers only respond when the workflow is active.

"Error initializing database" on startup -- PostgreSQL isn't ready yet. Add the depends_on with condition: service_healthy like in the Compose file above. If it still fails, check that the database credentials match between the Postgres and n8n service configs.

Workflows stop triggering after a few days -- The n8n container might have run out of memory. Check docker stats n8n. Set NODE_OPTIONS=--max-old-space-size=2048 in the environment if you're running a lot of concurrent workflows.
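
A quick way to check, plus the knob to turn if memory is the problem (2048 MB is just a starting point):

# Snapshot current memory usage
docker stats n8n --no-stream

# If it's pinned near the limit, add this under the n8n service's
# environment block in docker-compose.yml, then recreate the container:
#   NODE_OPTIONS: --max-old-space-size=2048
docker compose up -d n8n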