Build a GitHub Release Watcher with n8n and Claude: Complete Tutorial (43 Nodes, 4 Channels, $0.02/Day)
Every self-hosted stack grows until you can't remember what version anything is running. Vaultwarden puts out a security fix, Immich ships a new feature, Traefik patches a TLS bug — and you find out three weeks late because you forgot to check.
We built this workflow to fix that problem. It runs once a day, checks every repo and container registry in your stack for new releases, feeds each changelog through Claude Haiku for a quick analysis (is this a security fix? does it break anything? should I update now or later?), and sends you a color-coded digest on whichever channels you actually check — Discord, Telegram, Slack, or ntfy push notifications to your phone.
This is the third workflow in our digest trilogy, alongside the AI News Digest and the SSL Certificate Watcher. All three follow the same pattern: scheduled trigger, API fetching, AI analysis, multi-channel delivery. If you've built either of those, you'll recognize the bones here.
The full backstory — why we built it, what broke, what we'd change — is in the build log. This article is purely instructional: here's every node, here's what it does, here's how to configure it.
Architecture Overview
The workflow moves through eight stages. Here's the high-level flow:
Schedule Trigger (daily 8 AM)
│
▼
Configure Watcher ──► Build Repo Watchlist (manual repos)
│ │
▼ ▼
Auto-Detect Enabled? ──► Parse docker-compose.yml
│ │
│ Merge Manual + Auto
│ │
│ Deduplicate Watchlist
│ │
▼ ▼
Source Router
┌────────┴────────┐
▼ ▼
Fetch GitHub Releases Fetch Registry Tags
│ │
Extract Release Data Extract Registry Data
│ │
└──► Merge All Sources
│
Compare Versions
│
Has Updates? ──► (no) stop
│
Prep Changelog ──► Claude Haiku AI
│
Parse AI Response
┌────┴────┐
▼ ▼
Save to DB Apply Alert Rules
│
Urgency Router
┌──────┴──────┐
▼ ▼
Format Instant Format Digest
│ │
└──► Channel Router
┌──┬──┬──┬──┐
▼ ▼ ▼ ▼ ▼
DC TG SL NF Update Versions
Stages: Trigger --> Config --> Watchlist Building --> Release Detection --> Version Comparison --> AI Analysis --> Alert Routing --> Multi-Channel Delivery.
43 functional nodes. 8 sticky notes for documentation. One workflow that replaces checking GitHub manually.
Before You Start
You'll need a few things set up before building this workflow:
| Requirement | Guide | Required? |
|---|---|---|
| n8n instance (self-hosted) | n8n setup guide | Yes (or n8n Cloud) |
| Anthropic API key | Get your API key, Claude setup for n8n | Yes |
| At least one delivery channel | Discord webhook, Slack bot token, Telegram bot, or ntfy topic | Yes |
| GitHub personal access token | Setup guide | No (but recommended) |
| PostgreSQL database | PostgreSQL guide | No (for release history tracking) |
Without a GitHub token, you're limited to 60 API requests per hour. With one, you get 5,000. If you're monitoring more than a handful of repos, get the token.
For a general rundown of shared prerequisites across our tutorials, see the prerequisites guide.
Phase 1: Trigger and Configuration
Schedule Trigger
Start with a Schedule Trigger node (n8n-nodes-base.scheduleTrigger, typeVersion 1.2). This fires the workflow on a cron schedule.
Set the cron expression to:
0 8 * * *
That's daily at 8:00 AM. n8n uses the workflow's timezone setting, so make sure your workflow settings have the right timezone (we use America/Chicago). If you want it at a different time — say, 6:30 AM — the expression would be 30 6 * * *.
One thing worth noting: if you're on n8n self-hosted, the cron fires based on when the n8n process started and when it internally registers the schedule. After deploying, you may need to deactivate and reactivate the workflow for the cron to register properly. On n8n Cloud, this just works.
Configure Watcher
Connect the Schedule Trigger to a Set node (n8n-nodes-base.set, typeVersion 3.4) named "Configure Watcher". This is your single control panel for the entire workflow. Every setting lives here so you never have to dig through Code nodes to change a URL.
The node has 14 fields:
| Field | Type | Default | What it does |
|---|---|---|---|
github_token | String | "" (empty) | GitHub personal access token (how to get one). Empty string works fine for small watchlists (fewer than 60 repos), but you'll hit rate limits fast without one |
enable_auto_detect | Boolean | true | Whether to auto-build a watchlist by parsing your docker-compose.yml |
compose_url | String | "" | Raw URL to your docker-compose.yml (e.g., https://raw.githubusercontent.com/youruser/homelab/main/docker-compose.yml) |
compose_content | String | "" | Alternative: paste your entire docker-compose.yml as a string. Use this OR compose_url, not both |
enable_discord | Boolean | true | Send notifications to Discord |
discord_webhook_url | String | YOUR_WEBHOOK_URL | Your Discord webhook URL (how to get one) |
enable_telegram | Boolean | false | Send notifications to Telegram |
telegram_chat_id | String | YOUR_CHAT_ID | Your Telegram chat ID (setup guide) |
enable_slack | Boolean | false | Send notifications to Slack |
slack_channel_id | String | YOUR_CHANNEL_ID | Slack channel ID (setup guide) |
enable_ntfy | Boolean | false | Send push notifications via ntfy.sh |
ntfy_topic | String | release-watcher | Your ntfy topic name (setup guide). Pick any unique string — anyone who knows it can subscribe |
default_channels | String | discord | Comma-separated list of channels for repos without per-repo overrides (e.g., discord,telegram) |
default_instant_alert | Boolean | false | When false, only critical/security updates send immediately. Everything else batches into one digest. Set true to send every update the moment it's detected |
Replace the placeholder values (YOUR_WEBHOOK_URL, YOUR_CHAT_ID, YOUR_CHANNEL_ID) with your actual values. You only need to fill in the channels you're actually using.
Phase 2: Building the Watchlist
The Configure Watcher node feeds two parallel paths: a manually defined repo list and an optional auto-detection branch. Both merge together and get deduplicated before any API calls happen.
Manual Watchlist
The Build Repo Watchlist node is a Code node (n8n-nodes-base.code, typeVersion 2) where you define exactly which repos to monitor. The structure is an array of objects:
const repos = [
{
owner: 'immich-app',
repo: 'immich',
label: 'Immich',
category: 'media',
source: 'github',
dependsOn: ['postgres', 'redis']
},
{
owner: 'dani-garcia',
repo: 'vaultwarden',
label: 'Vaultwarden',
category: 'security',
source: 'github',
dependsOn: []
},
{
owner: 'traefik',
repo: 'traefik',
label: 'Traefik',
category: 'networking',
source: 'github',
dependsOn: []
},
// ... add your repos here
];
Each field:
- owner / repo: The GitHub `owner/repo` path. For `https://github.com/n8n-io/n8n`, that's `owner: 'n8n-io'` and `repo: 'n8n'`.
- label: Display name in your notifications. Can be anything you want.
- category: Grouping label for your own organization (`media`, `security`, `monitoring`, `networking`, `management`, etc.). Purely cosmetic.
- source: Always `'github'` for GitHub repos.
- dependsOn: Array of dependency names. If Immich depends on `postgres`, and PostgreSQL also gets an update, the notification will mention that Immich depends on it. Helpful when coordinating multi-service updates.
For container registries (Docker Hub images that don't have GitHub releases), add them to a separate array:
const registries = [
{
registry: 'dockerhub',
namespace: 'linuxserver',
image: 'heimdall',
label: 'Heimdall (Docker)',
category: 'management',
source: 'dockerhub',
dependsOn: []
},
{
registry: 'dockerhub',
namespace: 'linuxserver',
image: 'wireguard',
label: 'WireGuard (Docker)',
category: 'networking',
source: 'dockerhub',
dependsOn: []
},
];
The Code node also defines alertOverrides for per-repo notification routing:
var alertOverrides = {
'dani-garcia/vaultwarden': {
urgencyOverride: null,
channels: ['discord', 'telegram', 'ntfy'],
instantAlert: true
},
'traefik/traefik': {
urgencyOverride: null,
channels: ['discord', 'telegram'],
instantAlert: true
},
};
This means: when Vaultwarden or Traefik release anything, skip the daily digest and send an immediate notification to all three channels. You probably want this for security-sensitive software. Everything else falls back to the defaults from Configure Watcher.
Auto-Detection Branch
The second path from Configure Watcher goes to an IF node (n8n-nodes-base.if, typeVersion 2.3) named "Auto-Detect Enabled?" that checks whether enable_auto_detect is true.
If auto-detect is off, a Skip Auto-Detect Set node passes through a marker item and connects to the merge.
If auto-detect is on, another IF node — Has Compose URL? — checks whether you provided a URL. If yes, Fetch Compose File (HTTP Request, typeVersion 4.2) downloads your docker-compose.yml as plain text. If no URL but you pasted content into compose_content, it flows directly to the parser.
The Parse and Map Compose Code node is where the magic happens. It extracts every image: line from the YAML using regex, then looks up each image against an internal mapping table of 25+ common homelab services:
- Traefik, Immich, Vaultwarden, Grafana, Prometheus
- Jellyfin, Home Assistant, Pi-hole, Portainer, Gitea
- Nextcloud, Caddy, Nginx, Uptime Kuma, Watchtower
- Frigate, WireGuard, Heimdall, AdGuard Home, Authelia
- code-server, and common base images like PostgreSQL and Redis
Known images get mapped to their GitHub repo for full release note tracking. Unknown images fall back to Docker Hub tag monitoring — you still get notified of new versions, just without the changelog analysis.
The image matching is not just simple string comparison. The node strips common prefixes like ghcr.io/, docker.io/, and library/ before matching, so ghcr.io/immich-app/immich-server and immich-app/immich-server both resolve to the same GitHub repo. Here's a condensed version of the lookup table:
var imageToRepo = {
'vaultwarden/server': { owner: 'dani-garcia', repo: 'vaultwarden', ... },
'traefik': { owner: 'traefik', repo: 'traefik', ... },
'n8nio/n8n': { owner: 'n8n-io', repo: 'n8n', ... },
'grafana/grafana': { owner: 'grafana', repo: 'grafana', ... },
'prom/prometheus': { owner: 'prometheus', repo: 'prometheus', ... },
'louislam/uptime-kuma': { owner: 'louislam', repo: 'uptime-kuma', ... },
'homeassistant/home-assistant': { owner: 'home-assistant', repo: 'core', ... },
'portainer/portainer-ce': { owner: 'portainer', repo: 'portainer', ... },
'authelia/authelia': { owner: 'authelia', repo: 'authelia', ... },
// ... 25+ entries total
};
If you use a service that isn't in the lookup table, the node creates a Docker Hub watcher for it automatically. You'll get notified when a new tag appears, just without changelog details. To add a missing mapping, edit the imageToRepo object in the Parse and Map Compose code.
(A quick aside: the regex approach to parsing YAML is not something we're proud of. A proper YAML parser would be better. But n8n Code nodes run in a restricted sandbox without access to npm packages, so regex it is. It handles standard image: name:tag lines, ignores comments, and works for every docker-compose file we've thrown at it. If you have a compose file with multi-line YAML anchors or some unusual templating, it might miss an image. Good enough for real-world use.)
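To make the approach concrete, here's a condensed sketch of the two steps — pull `image:` lines out with a regex, then strip registry prefixes before the lookup. The function names and the exact regex are ours for illustration, not the node's literal code:

```javascript
// Sketch of the Parse and Map Compose approach (illustrative, not the
// exact node code): extract image references with a regex, then strip
// registry prefixes so variants resolve to the same lookup key.
function extractImages(composeYaml) {
  const images = [];
  // Matches lines like "    image: ghcr.io/immich-app/immich-server:v1.98.0"
  // and stops before trailing comments or quotes.
  const re = /^\s*image:\s*["']?([^\s"'#]+)/gm;
  let m;
  while ((m = re.exec(composeYaml)) !== null) {
    images.push(m[1]);
  }
  return images;
}

function normalizeImage(image) {
  // Drop the tag, then peel off common registry prefixes repeatedly,
  // so "docker.io/library/redis:7" and "redis" match the same entry.
  let key = image.split(':')[0];
  const prefix = /^(ghcr\.io|docker\.io|library)\//;
  while (prefix.test(key)) key = key.replace(prefix, '');
  return key;
}

const yaml = `
services:
  immich:
    image: ghcr.io/immich-app/immich-server:v1.98.0
  vaultwarden:
    image: vaultwarden/server:1.32.5  # pinned
`;
const keys = extractImages(yaml).map(normalizeImage);
// keys → ['immich-app/immich-server', 'vaultwarden/server']
```

Note the `split(':')[0]` trick would misbehave on a private registry with a port (`registry:5000/img`) — another reason the real node only targets well-known public images.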
Merge and Deduplicate
The Merge Manual + Auto node (Merge, typeVersion 3.2) combines both lists. Manual repos connect to input 0, auto-detected repos to input 1.
Then the Deduplicate Watchlist Code node removes overlaps. If the same repo appears in both your manual list and your docker-compose, the manual entry wins — it preserves your custom alert overrides and labels. The deduplication key is source:owner/repo for registry items and github:owner/repo for GitHub repos.
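The manual-wins rule is easy to express: insert manual entries into a map first, keyed as described, and let auto-detected duplicates bounce off. A minimal sketch (names are illustrative):

```javascript
// Sketch of manual-wins deduplication. Key format mirrors the article:
// "github:owner/repo" for GitHub repos, "source:namespace/image" for registries.
function dedupeWatchlist(manual, auto) {
  const keyOf = (item) =>
    item.source === 'github'
      ? `github:${item.owner}/${item.repo}`
      : `${item.source}:${item.namespace}/${item.image}`;

  const seen = new Map();
  // Manual entries go in first, so an auto-detected duplicate can never
  // overwrite custom labels or alert overrides.
  for (const item of [...manual, ...auto]) {
    const key = keyOf(item);
    if (!seen.has(key)) seen.set(key, item);
  }
  return [...seen.values()];
}

const manual = [{ source: 'github', owner: 'traefik', repo: 'traefik', label: 'Traefik (mine)' }];
const auto = [
  { source: 'github', owner: 'traefik', repo: 'traefik', label: 'Traefik' },
  { source: 'github', owner: 'grafana', repo: 'grafana', label: 'Grafana' },
];
const merged = dedupeWatchlist(manual, auto);
// merged has 2 entries; the Traefik entry keeps the manual label
```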
Phase 3: Release Detection
After deduplication, the watchlist hits the Source Router — a Switch node (n8n-nodes-base.switch, typeVersion 3.2) that splits items by their source field. GitHub repos go one way, Docker Hub and GHCR items go another.
GitHub Releases
GitHub items flow to Fetch Latest Release, an HTTP Request node (typeVersion 4.2) that hits:
https://api.github.com/repos/{owner}/{repo}/releases/latest
The node sends three headers:
User-Agent: n8n-release-watcher/1.0
Accept: application/vnd.github+json
Authorization: Bearer {token} (only if github_token is non-empty)
GitHub's API requires a User-Agent header — requests without one get rejected. The Authorization header is conditionally set: if you provided a token in Configure Watcher, it's included; if not, the header value is empty and GitHub treats it as an unauthenticated request.
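In the actual node this lives in n8n expression fields, but the logic boils down to a plain function. One way to express it (omitting the header entirely when no token is set, which GitHub also treats as unauthenticated):

```javascript
// Sketch of the conditional-Authorization logic as a plain function.
function buildGithubHeaders(githubToken) {
  return {
    'User-Agent': 'n8n-release-watcher/1.0',  // GitHub rejects requests without one
    Accept: 'application/vnd.github+json',
    // Only attach Authorization when a token was configured; an empty
    // github_token field means unauthenticated (60 requests/hour).
    ...(githubToken ? { Authorization: `Bearer ${githubToken}` } : {}),
  };
}

const anonHeaders = buildGithubHeaders('');
// anonHeaders carries User-Agent and Accept, but no Authorization key
```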
Rate limits matter here. Without a token: 60 requests per hour. With a token: 5,000 per hour. If you're monitoring 10 repos, each run uses 10 requests. Without a token you can run roughly 6 times per hour before hitting the limit. Comfortable for a daily schedule, but tight if you're testing repeatedly.
The node has onError: "continueRegularOutput" set, so a single failed request (404 for a repo with no releases, 403 for rate limiting) won't kill the entire workflow.
Extract Release Data then parses the responses. This Code node does more than just pass data through — it applies several filters:
- Rate limit detection: If a response contains "rate limit" in the message, that repo is skipped and a warning flag is set
- Pre-release filtering: Releases with `prerelease: true` are silently skipped. We only want stable releases
- Missing release handling: Repos that use only tags (no GitHub Releases) return no `tag_name` and get filtered out
The extracted fields for each valid release: tagName, releaseName, publishedAt, htmlUrl, and changelog (the full markdown body from the release notes).
If every single request was rate-limited, the node returns a specific warning item with the message "GitHub API rate limit exceeded. Add a GitHub token to the Repo Watchlist node for 5,000 requests/hour." This surfaces in the execution log so you know what happened instead of just seeing an empty run.
If some repos succeeded and some were rate-limited, a _rateLimitWarning flag gets attached to the first result so the downstream nodes can optionally include it in the digest.
Container Registry Tags
Registry items flow to Fetch Registry Tags (HTTP Request, typeVersion 4.2), which hits Docker Hub's tag API:
https://hub.docker.com/v2/repositories/{namespace}/{image}/tags?page_size=5&ordering=last_updated
For GHCR, it would hit https://ghcr.io/v2/{namespace}/{image}/tags/list.
Extract Registry Data parses the registry responses differently depending on the source:
- Docker Hub returns `{results: [{name, last_updated, ...}]}` — the node finds the first tag that isn't `latest` and doesn't contain `-rc`, `-beta`, or `-alpha`
- GHCR returns `{tags: ['v1.0', 'latest', ...]}` — the node filters and sorts the tags, picking the highest version number
The output gets shaped to match GitHub release data: same fields (tagName, label, htmlUrl, etc.) so downstream nodes don't need to care about the source. Container registries don't provide changelogs, so the changelog field gets a placeholder: "Container image tag update. Check source repo for release notes." The AI analysis still runs on this, but the summary will be brief.
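The Docker Hub selection rule is simple enough to show inline. A sketch of it (function name is ours), applied to a trimmed-down API response:

```javascript
// Sketch of the Docker Hub tag selection rule: take the first tag that
// isn't "latest" and carries no pre-release suffix.
function pickDockerHubTag(response) {
  const prerelease = /-(rc|beta|alpha)/i;
  const tag = (response.results || []).find(
    (t) => t.name !== 'latest' && !prerelease.test(t.name)
  );
  return tag ? tag.name : null;
}

const resp = {
  results: [
    { name: 'latest', last_updated: '2026-02-21T10:00:00Z' },
    { name: '2.10.0-rc1', last_updated: '2026-02-20T10:00:00Z' },
    { name: '2.9.3', last_updated: '2026-02-19T10:00:00Z' },
  ],
};
// pickDockerHubTag(resp) → '2.9.3'
```

This works because the request already sorts by `ordering=last_updated`, so "first acceptable tag" means "newest stable tag."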
Both branches merge in Merge All Sources (Merge node, typeVersion 3.1) before flowing to version comparison. The merge uses the default append mode — all items from both inputs combine into a single stream.
Phase 4: Version Comparison
This is where the workflow decides whether anything is actually new.
The Compare Versions Code node uses n8n's workflow static data — a persistent key-value store that survives across executions:
const staticData = $getWorkflowStaticData('global');
For each release in the merged list, it:
- Builds a storage key (`owner/repo` for GitHub, `source:owner/repo` for registries)
- Normalizes the tag (strips leading `v`, lowercases)
- Compares against the stored tag
Three outcomes are possible:
First run: If staticData.initialized doesn't exist, this is a fresh workflow. Every current version gets seeded into static data, and the node returns a single _firstRun: true item. No alerts sent. This prevents a flood of notifications the first time you activate the workflow.
No updates: If every tag matches what's stored, the node returns _noUpdates: true. The workflow stops here.
New releases found: Any repo where the tag changed gets passed downstream with its previousTag attached for context.
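Condensed into one function, the three outcomes look roughly like this (a plain object stands in for `$getWorkflowStaticData('global')`; this is a sketch of the logic, not the node's literal code):

```javascript
// Sketch of the Compare Versions decision. A plain object stands in
// for n8n's workflow static data store.
function normalizeTag(tag) {
  if (!tag) return '';
  return String(tag).trim().replace(/^v/i, '').toLowerCase();
}

function compareVersions(staticData, releases) {
  if (!staticData.initialized) {
    // First run: seed every current version, alert on nothing.
    for (const r of releases) staticData[r.key] = normalizeTag(r.tagName);
    staticData.initialized = true;
    return [{ _firstRun: true }];
  }
  // Note: this does NOT write new tags back — that happens in the
  // Update Stored Versions node, after delivery succeeds.
  const updates = releases
    .filter((r) => normalizeTag(r.tagName) !== staticData[r.key])
    .map((r) => ({ ...r, previousTag: staticData[r.key] }));
  return updates.length ? updates : [{ _noUpdates: true }];
}

const store = {};
compareVersions(store, [{ key: 'traefik/traefik', tagName: 'v3.3.1' }]);
// first call seeds store and returns [{ _firstRun: true }]
const second = compareVersions(store, [{ key: 'traefik/traefik', tagName: 'v3.3.2' }]);
// second call returns one update with previousTag '3.3.1'
```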
The Has Updates? IF node (typeVersion 2.3) checks whether the output contains a tagName field. True path continues to AI analysis. False path ends the execution.
One subtlety: version normalization strips the leading v and lowercases everything, so v1.32.5 and V1.32.5 and 1.32.5 all compare as equal. Without this, you'd get false positives from repos that inconsistently prefix their tags.
The normalization function:
function normalizeTag(tag) {
if (!tag) return '';
return String(tag).trim().replace(/^v/i, '').toLowerCase();
}
Short and intentionally simple. It doesn't try to parse semver or compare version numbers — it just checks for string equality after normalization. A release from 1.32.4 to 1.32.5 is detected. A re-tag of the same version (which some projects do for build metadata changes) is not. That's the right tradeoff for a daily watcher.
Phase 5: AI Analysis with Claude
This is where each new release gets analyzed by Claude Haiku to determine urgency, detect breaking changes, and flag security issues.
Preparing the Changelog
Prep Changelog for AI truncates each changelog to 1,500 characters — but with a twist. Before truncating, it searches for a "BREAKING CHANGES" section using regex. If found, that section gets preserved and appended to the end even if it would have been cut off by the character limit:
let breakingSection = '';
const bMatch = log.match(
/(?:#+\s*)?(?:BREAKING|Breaking Changes?|\u26a0\ufe0f)[\s\S]*?(?=\n#{1,3}\s|\n\n---|\n\n\n|$)/i
);
if (bMatch) {
breakingSection = '\n\n--- BREAKING CHANGES ---\n' + bMatch[0].trim();
}
const maxLen = 1500 - breakingSection.length;
if (log.length > maxLen) {
log = log.substring(0, maxLen).trim() + '\n\n[...truncated]';
}
This matters because some projects (Immich, for example) write long changelogs where the breaking changes section sits near the bottom. Naive truncation would cut it off entirely, and Claude would report "no breaking changes" when there are.
Claude Haiku Configuration
The AI chain uses two connected nodes:
Claude Haiku — an lmChatAnthropic node (typeVersion 1.3) configured with:
- Model: `claude-haiku-4-5-20251001` (the cheapest Claude model)
- Temperature: `0.2` (low creativity — we want consistent, factual analysis)
- Max tokens: `1024` (the response is a small JSON object)
You'll need to add your Anthropic API credential to this node. In n8n, go to Credentials > Add Credential > search "Anthropic" > paste your API key > save. Then open the Claude Haiku node and select the credential.
Analyze Release — a chainLlm node (typeVersion 1.7) that connects to Claude Haiku via the ai_languageModel connector. This is a Basic LLM Chain, not an AI Agent — it processes each item independently without conversation memory accumulation. (We use chainLlm instead of agent specifically because agents accumulate context across batch items, inflating token usage by 10x when processing multiple releases. Learned that one the hard way.)
The system prompt:
You are a self-hosted software update analyst. Summarize release notes
for a homelabber.
For each release, provide:
1. A 1-2 sentence summary of what changed (facts only, no marketing)
2. Whether there are BREAKING CHANGES (yes/no)
3. If breaking: what specifically breaks and migration steps
4. Update urgency: "critical" (security fix, CVE), "recommended"
(useful features/bugfixes), or "optional" (minor/cosmetic)
5. Security assessment: flag any CVEs, security patches, vulnerability
fixes, or security-related keywords
6. Top 3 most impactful changes as a brief list
7. Specific migration/upgrade steps needed (empty string if none)
Return ONLY valid JSON:
{
"summary": "1-2 sentence summary",
"breaking": false,
"breakingDetails": null,
"urgency": "recommended",
"security": false,
"securityDetails": null,
"keyChanges": ["change1", "change2", "change3"],
"migrationNotes": ""
}
Security rules:
- If changelog mentions CVE-XXXX-YYYY, set security:true and include
CVE numbers in securityDetails
- If changelog mentions "security fix", "vulnerability", "patch",
"XSS", "SQL injection", "auth bypass", set security:true
- Security issues automatically make urgency "critical" unless they
are minor/low-severity
Be concise. Homelabbers want facts, not marketing language.
The user prompt template feeds in the release data:
Software: {{ $json.label }} ({{ $json.owner }}/{{ $json.repo }})
Version: {{ $json.tagName }}{{ $json.previousTag ? ' (previous: ' + $json.previousTag + ')' : '' }}
Source: {{ $json.source || 'github' }}
Release date: {{ $json.publishedAt || 'unknown' }}
Release notes:
{{ $json.changelogTruncated }}
Parsing the AI Response
Parse AI Response extracts the JSON from Claude's response using a regex match for {...}. If parsing fails (Claude occasionally wraps JSON in markdown code fences), it falls back to a default object with the raw text as the summary and urgency: 'optional'.
One enforcement rule: if security is true but urgency isn't already critical, the node escalates it:
if (analysis.security === true && analysis.urgency !== 'critical') {
analysis.urgency = 'critical';
}
Security fixes are always critical. Period.
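Putting the extraction, the fallback, and the escalation rule together, the parsing step looks roughly like this (function name is ours; the fallback shape mirrors the description above):

```javascript
// Sketch of Parse AI Response: grab the JSON blob out of whatever Claude
// returned (markdown fences and all), fall back gracefully, then enforce
// the security-implies-critical rule.
function parseAiResponse(text) {
  let analysis;
  const match = String(text).match(/\{[\s\S]*\}/);  // first "{" to last "}"
  try {
    analysis = JSON.parse(match ? match[0] : '');
  } catch {
    // Unparseable response: degrade to a safe default object.
    analysis = { summary: String(text).slice(0, 300), urgency: 'optional', security: false };
  }
  if (analysis.security === true && analysis.urgency !== 'critical') {
    analysis.urgency = 'critical';
  }
  return analysis;
}

const fenced = '```json\n{"summary": "CVE fix", "urgency": "recommended", "security": true}\n```';
// parseAiResponse(fenced).urgency → 'critical' (escalated from 'recommended')
```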
Cost: A typical run monitoring 10-15 repos where 2-3 have updates costs roughly $0.01 to $0.03 in API usage. Haiku runs at $0.25 per million input tokens and $1.25 per million output tokens. Even running daily for a month, you're looking at well under a dollar.
Phase 6: Alert Routing
After AI analysis, the workflow splits into a database storage path and an alert delivery path. Both run in parallel from the Parse AI Response output.
Saving to the Database
Prep DB Record builds a PostgreSQL INSERT statement for each release. Save to DB (Postgres node, typeVersion 2.5) executes it.
This is optional but recommended for long-term tracking. You'll need the release_history table. Run this migration on your PostgreSQL database:
CREATE TABLE IF NOT EXISTS release_history (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
repo_key VARCHAR(200) NOT NULL,
source VARCHAR(20) NOT NULL DEFAULT 'github',
tag_name VARCHAR(200) NOT NULL,
previous_tag VARCHAR(200),
release_url TEXT,
changelog_raw TEXT,
ai_summary TEXT,
ai_urgency VARCHAR(20),
has_breaking_changes BOOLEAN DEFAULT false,
breaking_details TEXT,
security_advisory BOOLEAN DEFAULT false,
advisory_severity VARCHAR(20),
update_command TEXT,
detected_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
notified_channels TEXT[] DEFAULT '{}',
metadata JSONB DEFAULT '{}'::jsonb
);
CREATE INDEX idx_release_history_repo
ON release_history(repo_key, detected_at DESC);
CREATE INDEX idx_release_history_urgency
ON release_history(ai_urgency);
CREATE INDEX idx_release_history_detected
ON release_history(detected_at DESC);
If you don't need the database history, you can delete the Prep DB Record and Save to DB nodes. The alert path works independently.
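For reference, here's roughly what Prep DB Record hands to the Postgres node: a parameterized INSERT matching the schema above. This is an illustrative sketch (a subset of the columns, with hypothetical field names), not the node's exact output:

```javascript
// Sketch of Prep DB Record: build a parameterized INSERT for the
// release_history table. Parameterized values avoid SQL injection from
// changelog text.
function prepDbRecord(update) {
  const query = `
    INSERT INTO release_history
      (repo_key, source, tag_name, previous_tag, release_url,
       ai_summary, ai_urgency, has_breaking_changes, security_advisory)
    VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`;
  const values = [
    update.key, update.source || 'github', update.tagName,
    update.previousTag || null, update.htmlUrl || null,
    update.analysis.summary, update.analysis.urgency,
    update.analysis.breaking === true, update.analysis.security === true,
  ];
  return { query, values };
}

const rec = prepDbRecord({
  key: 'dani-garcia/vaultwarden', tagName: '1.32.5', previousTag: '1.32.4',
  analysis: { summary: 'Security fix', urgency: 'critical', breaking: true, security: true },
});
// rec.values[6] → 'critical'
```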
Applying Alert Rules
Apply Alert Rules checks each update against the per-repo alertOverrides defined back in the Build Repo Watchlist node. It resolves three things:
- finalUrgency: The alertOverride's `urgencyOverride` if set, otherwise the AI's urgency rating
- isInstant: `true` if the repo has `instantAlert: true` OR the urgency is `critical`
- channels: The repo's override channels, or the defaults from Configure Watcher
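The resolution order matters: override beats AI, and criticality forces an instant alert regardless of overrides. A sketch of that three-way resolution (names are ours):

```javascript
// Sketch of Apply Alert Rules: per-repo overrides win, then AI output,
// then the global defaults from Configure Watcher.
function applyAlertRules(update, alertOverrides, config) {
  const key = `${update.owner}/${update.repo}`;
  const o = alertOverrides[key] || {};
  const finalUrgency = o.urgencyOverride || update.analysis.urgency;
  return {
    ...update,
    finalUrgency,
    // instantAlert override, critical urgency, or the global
    // default_instant_alert switch all force the instant path.
    isInstant: o.instantAlert === true || finalUrgency === 'critical'
      || config.default_instant_alert === true,
    channels: o.channels || config.default_channels.split(',').map((c) => c.trim()),
  };
}

const config = { default_channels: 'discord', default_instant_alert: false };
const overrides = { 'traefik/traefik': { channels: ['discord', 'telegram'], instantAlert: true } };
const routed = applyAlertRules(
  { owner: 'grafana', repo: 'grafana', analysis: { urgency: 'optional' } },
  overrides, config
);
// routed.isInstant → false, routed.channels → ['discord']
```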
Splitting Instant vs. Batched
The Urgency Router (Switch node, typeVersion 3.2) splits the stream:
- Output 0 (instant): `isInstant === true` — these get formatted and sent immediately as individual alerts
- Output 1 (batch): `isInstant === false` — these accumulate into one daily digest message
Critical security updates and repos with instantAlert: true always take the instant path. Everything else gets batched so you're not bombarded with a dozen individual messages when your stack does a coordinated release day.
Phase 7: Delivery
Formatting Instant Alerts
Format Instant Alert creates one notification per urgent release. Each contains:
- An urgency emoji and label
- The AI summary
- Breaking change details (if any)
- Security advisory (if any)
- Key changes list
- A ready-to-run docker update command
- A link to the full release notes
The node outputs pre-formatted payloads for all four channels at once: discordPayload, telegramMessage, slackText, and ntfyPayload.
Formatting the Daily Digest
Format Digest combines all batched updates into a single message sorted by urgency (critical first, then recommended, then optional). The output looks like this:
📦 Stack Update Digest — Feb 22, 2026
14 sources checked, 3 updates found.
🔴 CRITICAL: Vaultwarden 1.32.5
Security fix for auth bypass vulnerability.
⚠️ BREAKING: Password hash format changed, see migration guide.
🛡️ SECURITY: CVE-2026-1234 — authentication bypass in LDAP module.
Key changes: auth bypass fix; LDAP improvements; admin panel update
📋 Migration: Run vaultwarden migration tool before starting new version
💻 `docker pull vaultwarden/server:1.32.5 && docker compose up -d`
🔗 Heads up: Authelia depends on vaultwarden
→ https://github.com/dani-garcia/vaultwarden/releases/tag/1.32.5
🟠 RECOMMENDED: Immich v1.98.0
Intel Arc GPU transcoding support, thumbnail memory leak fix.
Key changes: GPU transcoding; memory leak fix; new map view
💻 `docker compose pull && docker compose up -d`
🔗 Heads up: Immich depends on postgres, redis
→ https://github.com/immich-app/immich/releases/tag/v1.98.0
🔵 OPTIONAL: Grafana v11.5.1
Minor UI fixes and dashboard loading performance improvement.
Key changes: dashboard loading speed; panel editor UX; auth token refresh
💻 `docker pull grafana/grafana:v11.5.1 && docker compose up -d`
→ https://github.com/grafana/grafana/releases/tag/v11.5.1
— 1 critical, 1 recommended, 1 optional.
The digest also includes dependency tracking. If you configured dependsOn in your watchlist (e.g., Immich depends on postgres and redis), and both Immich and PostgreSQL have updates in the same digest, the message notes the dependency. This is useful when you need to coordinate the update order — always update the dependency first.
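The cross-check itself is a small set intersection. A sketch of the idea (function name and note wording are ours, not the workflow's exact output):

```javascript
// Sketch of the dependency cross-check: flag when a service and one of
// its dependsOn entries both have updates in the same digest.
function findDependencyNotes(updates) {
  const updatedNames = new Set(
    updates.map((u) => (u.repo || u.image || '').toLowerCase())
  );
  const notes = [];
  for (const u of updates) {
    const hits = (u.dependsOn || []).filter((d) => updatedNames.has(d.toLowerCase()));
    if (hits.length) {
      notes.push(`${u.label} depends on ${hits.join(', ')} (update the dependency first)`);
    }
  }
  return notes;
}

const notes = findDependencyNotes([
  { label: 'Immich', repo: 'immich', dependsOn: ['postgres', 'redis'] },
  { label: 'PostgreSQL', repo: 'postgres', dependsOn: [] },
]);
// one note: Immich depends on postgres (redis had no update this digest)
```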
The update commands are generated per-repo from a lookup table inside the Format Digest code:
const dockerUpdateMap = {
'immich-app/immich': 'docker compose pull && docker compose up -d',
'dani-garcia/vaultwarden': 'docker pull vaultwarden/server:TAG && docker compose up -d',
'traefik/traefik': 'docker pull traefik:TAG && docker compose up -d',
'n8n-io/n8n': 'docker pull n8nio/n8n:TAG && docker compose up -d',
'grafana/grafana': 'docker pull grafana/grafana:TAG && docker compose up -d',
// ... more entries
};
TAG gets replaced with the actual version number at runtime. For Docker Hub and GHCR items, the command is built dynamically from the namespace and image name. For repos not in the map, the fallback is a comment: # Check owner/repo release notes for upgrade instructions.
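Putting the three cases together (map hit, registry item, fallback comment), a sketch with a condensed map:

```javascript
// Sketch of the update-command builder: map hit with TAG substitution,
// dynamic command for registry items, comment fallback otherwise.
// (dockerUpdateMap condensed; function name is ours.)
const dockerUpdateMap = {
  'dani-garcia/vaultwarden': 'docker pull vaultwarden/server:TAG && docker compose up -d',
  'immich-app/immich': 'docker compose pull && docker compose up -d',
};

function buildUpdateCommand(update) {
  const key = `${update.owner}/${update.repo}`;
  if (dockerUpdateMap[key]) {
    // Substitute the real version; tags in pull commands drop the "v" prefix.
    return dockerUpdateMap[key].replace('TAG', update.tagName.replace(/^v/, ''));
  }
  if (update.source === 'dockerhub') {
    // Registry items: build the command from namespace/image directly.
    return `docker pull ${update.namespace}/${update.image}:${update.tagName} && docker compose up -d`;
  }
  return `# Check ${key} release notes for upgrade instructions`;
}

buildUpdateCommand({ owner: 'dani-garcia', repo: 'vaultwarden', tagName: '1.32.5' });
// → 'docker pull vaultwarden/server:1.32.5 && docker compose up -d'
```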
The digest node also generates per-channel formatted versions:
- Discord: Rich embed objects with color coding (red `16711680` = critical, orange `16750848` = recommended, blue `3381503` = optional)
- Slack: Markdown-formatted text with `*bold*` labels, triple-backtick code blocks for update commands, and `<url|text>` links
- ntfy: Structured payload with priority levels (5 = max/critical, 4 = high/recommended, 3 = default/optional) and emoji tags
Channel Router
Both the instant alerts and the digest feed into the Channel Router (Switch node, typeVersion 3.2) with allMatchingOutputs: true. This means items flow to multiple outputs simultaneously. The router checks that the relevant payload fields exist and routes accordingly.
Discord
We use an HTTP Request node instead of n8n's built-in Discord node because webhooks are simpler to configure — no bot token needed, no OAuth, just a URL. Create a webhook in your Discord server settings: Server Settings > Integrations > Webhooks > New Webhook. Copy the URL and paste it in Configure Watcher's discord_webhook_url field.
Send Discord (HTTP Request node, typeVersion 4.2) POSTs to your webhook URL:
// The request body is a Discord webhook payload:
{
"content": "📦 **Stack Update Digest** — Feb 22, 2026 (14 sources checked)",
"embeds": [
{
"title": "🔴 Vaultwarden 1.32.5",
"description": "Security fix for auth bypass...\n\n⚠️ **BREAKING:** Password hash format changed\n\n🛡️ **SECURITY:** CVE-2026-1234\n\n```\ndocker pull vaultwarden/server:1.32.5 && docker compose up -d\n```",
"url": "https://github.com/dani-garcia/vaultwarden/releases/tag/1.32.5",
"color": 16711680,
"footer": { "text": "CRITICAL | SECURITY" }
}
]
}
A gotcha we hit: Discord's CDN (Cloudflare) returns 403 if you don't include a User-Agent header. The node explicitly sends User-Agent: n8n-release-watcher/1.0.
Each update gets its own embed within a single webhook call. Discord limits webhooks to 10 embeds per message, so if you somehow have more than 10 updates in one digest (that would be quite a day), the extras wrap into a second message. The onError: "continueRegularOutput" setting means a failed Discord delivery won't prevent the other channels from firing.
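The wrap-around logic is a straightforward chunking step. A sketch of it (function name is ours):

```javascript
// Sketch of splitting embeds across webhook calls: Discord caps a
// single webhook message at 10 embeds.
function chunkEmbeds(embeds, perMessage = 10) {
  const messages = [];
  for (let i = 0; i < embeds.length; i += perMessage) {
    messages.push({ embeds: embeds.slice(i, i + perMessage) });
  }
  return messages;
}

const payloads = chunkEmbeds(
  Array.from({ length: 12 }, (_, i) => ({ title: `Update ${i + 1}` }))
);
// payloads.length → 2: the first message carries 10 embeds, the second 2
```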
Telegram
Send Telegram uses the built-in Telegram node (n8n-nodes-base.telegram, typeVersion 1.2). To set this up:
- Message @BotFather on Telegram and create a new bot (full guide)
- Copy the bot token
- In n8n: Credentials > Add Credential > search "Telegram" > paste the token
- Message your bot or add it to a group chat
- Get your chat ID by messaging @userinfobot (for personal) or checking the group chat URL
The chat ID from Configure Watcher gets passed to the node. The message is plain text with Unicode emoji — Telegram renders it cleanly. For group chats, the chat ID is negative (e.g., -1001234567890).
Slack
Send Slack uses the Slack node (n8n-nodes-base.slack, typeVersion 2.2). You'll need a Slack app with a bot token — the Slack bot token guide walks through the full setup. The channel ID is the alphanumeric string from your channel URL (looks like C0ADFQ9GSBV). Don't use the channel name — n8n's Slack v2 node requires the ID wrapped in a resourceLocator object, which happens automatically when you set the select: "channel" parameter.
The message uses Slack's markdown: *bold* for emphasis, triple backticks for code blocks, and <url|text> for clickable links. Each release gets its own section with the update command in a code block.
ntfy
Send ntfy is an HTTP Request node that POSTs to https://ntfy.sh/{topic} with custom headers for the title, priority, and tags:
POST https://ntfy.sh/release-watcher
Headers:
Title: Stack Update Digest — Feb 22, 2026
Priority: 4
Tags: package,warning
Body: 3 updates: 1 critical, 1 recommended, 1 optional
ntfy is the simplest channel to set up (setup guide). No accounts, no tokens, no OAuth. Just pick a topic name, subscribe on your phone via the ntfy Android app or iOS app, and notifications appear. The priority header maps to Android notification priorities, so critical updates (priority 5) will break through Do Not Disturb mode.
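The urgency-to-priority mapping is a one-liner worth seeing. A sketch of how the headers get assembled (function name and the critical-alert tag are our assumptions; the Priority/Tags values for the non-critical case match the digest example above):

```javascript
// Sketch of building ntfy headers from the digest's highest urgency.
// The "rotating_light" tag for critical alerts is our choice, not
// necessarily the workflow's.
function ntfyHeaders(digest) {
  const priority = { critical: 5, recommended: 4, optional: 3 }[digest.topUrgency] || 3;
  return {
    Title: digest.title,
    Priority: String(priority),  // 5 breaks through Android Do Not Disturb
    Tags: priority === 5 ? 'package,rotating_light' : 'package,warning',
  };
}

const h = ntfyHeaders({ title: 'Stack Update Digest', topUrgency: 'critical' });
// h.Priority → '5'
```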
The only catch: ntfy topics are public by default. Anyone who guesses your topic name can subscribe. Use something non-obvious — my-homelab-release-watcher-7f3a is better than releases. If you need privacy, ntfy supports authentication on self-hosted instances, but for release notifications there's nothing sensitive in the payloads.
All four channel nodes have onError: "continueRegularOutput" set. If one channel fails (Discord webhook expired, Telegram bot got blocked, etc.), the others still fire. Silent failures won't eat your notifications.
Phase 8: Version Storage
After successful delivery, the Update Stored Versions Code node writes the new version tags back to workflow static data:
const staticData = $getWorkflowStaticData('global');
const updates = $input.first().json.updates || [];
for (const u of updates) {
staticData[u.key] = u.tag;
}
return [{ json: { versionsUpdated: updates.length, versions: updates } }];
This ensures each release is only reported once. The next time the workflow runs, Compare Versions will see that these tags match what's stored and skip them.
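The read side of this round-trip can be sketched as a pure function. `diffVersions` is hypothetical — the real Compare Versions node may structure its output differently — and `stored` stands in for the object `$getWorkflowStaticData('global')` returns:

```javascript
// Sketch of the Compare Versions logic, assuming one stored tag per watchlist key.
function diffVersions(stored, fetched) {
  if (Object.keys(stored).length === 0) {
    // First run: seed every tag, report nothing.
    for (const f of fetched) stored[f.key] = f.tagName;
    return { _firstRun: true, updates: [] };
  }
  // Any fetched tag that differs from the stored one is an update.
  const updates = fetched.filter(f => stored[f.key] !== f.tagName);
  return updates.length ? { updates } : { _noUpdates: true, updates: [] };
}

const stored = { 'immich-app/immich': 'v1.94.0' };
const result = diffVersions(stored, [
  { key: 'immich-app/immich', tagName: 'v1.95.0' },
]);
// result.updates contains the immich release; after delivery, Update Stored
// Versions writes 'v1.95.0' back so the next run reports nothing.
```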
Static data persists in n8n's database as part of the workflow record. It survives n8n restarts and container recreations. The only way to lose it is to delete and reimport the workflow, which creates a new workflow with a fresh static data store.
One caveat: static data writes do NOT persist during manual test executions in n8n. Only scheduled or webhook-triggered runs actually save. This is by design in n8n. When you click "Test workflow," the first-run seeding happens in memory but won't actually be written. Your second manual test will also see it as a first run. Once you toggle the workflow active and it fires on schedule, static data persists normally.
Testing Your Workflow
Before going live, test it step by step.
First Test Run
Click Test workflow (the play button at the top). This executes the entire chain from the Schedule Trigger.
Expected behavior on the first run:
- Configure Watcher outputs your settings
- Build Repo Watchlist outputs your repos
- Auto-detect (if enabled) parses your compose and adds more repos
- Merge and deduplicate produce the final list
- Source Router splits GitHub from registry items
- All releases are fetched successfully
- Compare Versions sees no stored versions, seeds everything, returns `_firstRun: true`
- Has Updates? returns false (no `tagName` field on the first-run marker)
- Workflow stops. No alerts sent.
Check the execution log — every node should show green checkmarks. If Fetch Latest Release shows errors, check that your repos exist and have published releases (not just tags).
Second Test Run
Run it again immediately. Because static data changes don't persist across manual executions, this run behaves exactly like the first: Compare Versions starts from an empty store, seeds everything again, and stops with `_firstRun: true`. You'll only see a clean `_noUpdates: true` stop once the workflow is active and has completed a real scheduled run.
Testing Update Detection
To actually test the alert path, you need to trick Compare Versions into thinking something changed by overriding a stored version in code:
- Open the Compare Versions node
- In the n8n editor, the static data isn't directly editable through the UI. Instead, add a temporary line at the top of the Code node:
// TEMPORARY: Force an old version for testing
staticData['immich-app/immich'] = 'v1.0.0';
- Run the workflow. It will detect that Immich's current release is different from `v1.0.0` and trigger the full AI analysis and delivery chain.
- Check your Discord/Telegram/Slack/ntfy for the alert.
- Remove the temporary line before going live.
Verifying Individual Channels
If you're not sure whether a channel works, test them individually before running the full workflow.
Discord: Send a test webhook with curl:
curl -X POST "YOUR_DISCORD_WEBHOOK_URL" \
-H "Content-Type: application/json" \
-H "User-Agent: n8n-release-watcher/1.0" \
-d '{"content": "Test from release watcher"}'
If you get a 204 response, the webhook is valid. If you get 403, you're missing the User-Agent header (Cloudflare blocks bare requests to Discord webhooks).
ntfy: Open https://ntfy.sh/YOUR_TOPIC in a browser tab to watch for messages, then from a terminal:
curl -d "Test notification" \
-H "Title: Release Watcher Test" \
-H "Priority: 3" \
https://ntfy.sh/your-topic-name
Telegram: If you've already set up the bot, send it a message first (bots can't initiate conversations), then test via the Telegram Bot API:
curl "https://api.telegram.org/bot{YOUR_BOT_TOKEN}/sendMessage?chat_id={YOUR_CHAT_ID}&text=Test"
Slack: The easiest test is to use the Slack node directly in n8n — create a test workflow with just the Slack node, hardcode a message, and run it.
Common Issues During Testing
A few things that might trip you up:
- Empty execution after Fetch Latest Release: The repo might not have any GitHub Releases (only tags). The GitHub Releases API returns 404 for repos that use lightweight tags instead of the Releases feature. Remove those repos from your watchlist or switch them to Docker Hub monitoring.
- Rate limit errors on multiple test runs: Each test run uses one API call per GitHub repo. If you're iterating quickly with 10+ repos and no token, you'll exhaust the 60/hour limit fast. Add the token or reduce your test watchlist temporarily.
- Telegram "chat not found": The bot needs to receive a message from you first before it can send to your chat. Open the bot in Telegram and send `/start`.
- Discord 403 error: Missing User-Agent header. The workflow includes it, but if you're testing manually with curl, don't forget it.
Going Live
Once your tests pass:
- Remove any debug code you added for testing
- Toggle the workflow Active using the switch in the top-right corner of the n8n editor
- The workflow will now run daily at 8 AM in your configured timezone
The first scheduled run will be a true first run — it seeds all versions and sends no alerts. From the second day onward, you'll get notifications whenever something changes.
Adjusting the Schedule
To change the time, open the Schedule Trigger node and modify the cron expression:
| Cron Expression | When it runs |
|---|---|
| `0 8 * * *` | Daily at 8:00 AM |
| `0 7 * * 1-5` | Weekdays at 7:00 AM |
| `0 */6 * * *` | Every 6 hours |
| `0 8,20 * * *` | Twice daily, 8 AM and 8 PM |
| `30 6 * * *` | Daily at 6:30 AM |
After changing the cron expression, deactivate and reactivate the workflow for the new schedule to register.
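If you want to sanity-check an expression before pasting it in, a minimal matcher covering just the patterns in the table can be sketched. This is a deliberately tiny toy, not n8n's scheduler — it handles only `*`, `*/n`, ranges, and comma lists, and ignores seconds, month/day names, and timezone handling:

```javascript
// Toy cron matcher: does `date` match the 5-field expression?
// Fields: minute hour day-of-month month day-of-week (0 = Sunday).
function cronMatches(expr, date) {
  const fields = expr.trim().split(/\s+/);
  const values = [date.getMinutes(), date.getHours(), date.getDate(),
                  date.getMonth() + 1, date.getDay()];
  return fields.every((field, i) =>
    field.split(',').some(part => {
      if (part === '*') return true;
      const step = part.match(/^\*\/(\d+)$/);            // */n
      if (step) return values[i] % Number(step[1]) === 0;
      const range = part.match(/^(\d+)-(\d+)$/);          // a-b
      if (range) return values[i] >= Number(range[1]) && values[i] <= Number(range[2]);
      return values[i] === Number(part);                  // literal value
    })
  );
}

// Monday 2026-02-23 at 07:00 matches the weekday schedule:
cronMatches('0 7 * * 1-5', new Date(2026, 1, 23, 7, 0)); // → true
```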
Monitor your workflow through n8n's Executions tab. Each run shows whether it completed successfully, how long it took, and which nodes ran. Failed executions show which node errored and the error message. A healthy run with no updates typically finishes in under 30 seconds. A run with several updates that triggers AI analysis and multi-channel delivery takes a couple of minutes depending on how fast the Claude API responds.
For the full backstory on building this workflow — what went wrong, what we learned, and what we'd do differently — check the build log.
What a Typical Week Looks Like
Most days, the workflow detects zero updates. It checks all repos, compares versions, finds nothing new, and stops. Total execution: a few seconds of API calls. No notifications.
Maybe twice a week, one or two repos have updates. Claude analyzes the changelogs, rates them as "recommended" or "optional," and they get batched into a single digest message. You get one notification that morning.
Once in a while — maybe monthly — something critical drops. A Vaultwarden security patch, a Traefik CVE fix. Those trigger the instant alert path and land on your phone within minutes of the workflow running. That's the run that makes the whole thing worth setting up.
What's Next
The workflow as built covers the most common use cases, but there's plenty of room to customize:
Add more repos. Just edit the repos array in Build Repo Watchlist. If you add more than 50 GitHub repos, you'll definitely want a GitHub token.
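For illustration, an added entry and the downstream dedup step might look like this — the field names (`source`, `repo`, `image`) and the dedup key are assumptions about the workflow's internal shape, not its exact code:

```javascript
// Hypothetical Build Repo Watchlist entries; mix GitHub repos and registry images.
const repos = [
  { source: 'github', repo: 'immich-app/immich' },
  { source: 'github', repo: 'traefik/traefik' },
  { source: 'dockerhub', image: 'vaultwarden/server' },
  { source: 'github', repo: 'traefik/traefik' }, // auto-detect found this one too
];

// Deduplicate the way the Merge + Deduplicate stage would: first occurrence wins.
const seen = new Set();
const watchlist = repos.filter(r => {
  const key = `${r.source}:${r.repo || r.image}`; // assumed dedup key
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
});
// watchlist keeps three entries: the duplicate traefik line is dropped
```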
Switch AI models. Swap claude-haiku-4-5-20251001 for claude-sonnet-4-5-20250929 in the Claude Haiku node if you want more detailed analysis. Cost goes up roughly 10x, but the summaries are noticeably better for complex changelogs.
Tighten the schedule. If you want faster detection, change the cron to every 4 or 6 hours. GitHub rate limits are the main constraint — with a token and 15 repos, you can comfortably run every 2 hours.
Build a Grafana dashboard. If you set up the release_history table, the migration includes a v_release_history_daily view that aggregates releases by day, source, and urgency. Point a Grafana PostgreSQL data source at it for release frequency and security trend panels. Useful for answering "how many security updates did my stack have this quarter?"
Add GHCR private registries. If you use private GitHub Container Registry images, the Fetch Registry Tags node supports authentication via the same GitHub token. You'd add an Authorization header similar to the GitHub Releases API call.
Wire up a Grafana alert. If the release_history table shows a security advisory that hasn't been resolved (i.e., you haven't updated yet), Grafana can fire a separate alert reminding you. The table has everything you need: security_advisory, ai_urgency, detected_at.
For more n8n workflows in this series, check out the AI News Digest for monitoring tech news sources and the SSL Certificate Watcher for tracking certificate expiration across your domains.