# The 5 Docker Backup Profiles Every Homelab Needs
Most homelab backup guides treat every container the same. Back up all the volumes. Run daily. Keep 7 snapshots. Move on.
That's wrong. A PostgreSQL database and a Plex metadata folder have completely different backup requirements. The database needs a pre-backup dump command, extended retention, and import-based restore verification. The Plex folder needs to exclude 4TB of media files and only keep a week of snapshots.
Five patterns cover nearly every Docker service in a homelab. Each one is a profile: a preset combination of pre-hooks, stop behavior, retention policy, and verification type.
## Profile 1: Database
For: PostgreSQL, MySQL/MariaDB, MongoDB, InfluxDB
The database profile exists because naive volume backup of a running database produces an inconsistent snapshot. If PostgreSQL is mid-write when you copy the data directory, you get a backup that pg_restore will reject.
The fix: run a dump command before backup. The profile auto-detects the database type from the Docker image name and generates the right command:
| Image contains | Dump command |
|---|---|
| `postgres`, `pgvector`, `timescale`, `postgis` | `pg_dump -Fc -U $POSTGRES_USER -f /tmp/backup/dump.sql $POSTGRES_DB` |
| `mysql`, `mariadb`, `percona` | `mysqldump -uroot -p$MYSQL_PASSWORD --all-databases > /tmp/backup/dump.sql` |
| `mongo` | `mongodump --archive=/tmp/backup/dump.archive` |
The detection isn't just matching exact image names. A container running pgvector/pgvector:pg17 or timescale/timescaledb:latest-pg16 gets recognized as PostgreSQL and receives the right pg_dump hook. Since pgvector and TimescaleDB are common in homelabs doing anything with AI or time-series data, this matters.
The dump runs inside the container via docker exec, so it uses the database binary that matches the running version. No version mismatch issues.
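A minimal sketch of that detection logic (the function name and exact matching rules are illustrative, not the tool's actual internals):

```shell
#!/usr/bin/env bash
# Map a Docker image name to the dump command from the table above.
# Substring matching is what lets pgvector/pgvector:pg17 count as PostgreSQL.
detect_dump_cmd() {
  local image="$1"
  case "$image" in
    *postgres*|*pgvector*|*timescale*|*postgis*)
      # Single quotes on purpose: the variables expand inside the container.
      echo 'pg_dump -Fc -U $POSTGRES_USER -f /tmp/backup/dump.sql $POSTGRES_DB' ;;
    *mysql*|*mariadb*|*percona*)
      echo 'mysqldump -uroot -p$MYSQL_PASSWORD --all-databases > /tmp/backup/dump.sql' ;;
    *mongo*)
      echo 'mongodump --archive=/tmp/backup/dump.archive' ;;
    *)
      return 1 ;;  # not a recognized database image
  esac
}

# The generated command then runs inside the container, e.g.:
#   docker exec "$container" sh -c "$(detect_dump_cmd "$image")"
```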
Retention is extended: 7 daily, 4 weekly, 6 monthly. Databases tend to be the most valuable data in a homelab, and storage is cheap thanks to restic dedup. A week of PostgreSQL dumps with deduplication might add 50-100MB of unique data total.
One gotcha with PostgreSQL: use custom format (-Fc), not plaintext SQL. Custom format is binary, restores 10-30x faster, and supports parallel restore. Plaintext dumps work but are painfully slow to restore for anything over 50MB.
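At restore time, the payoff looks like this (a sketch; the dump path matches the table above):

```shell
# Restore a custom-format (-Fc) dump with 4 parallel jobs.
# A plaintext dump would instead be replayed single-threaded via psql.
pg_restore -U "$POSTGRES_USER" -d "$POSTGRES_DB" -j 4 /tmp/backup/dump.sql
```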
```yaml
labels:
  backup.nxsi.enable: "true"
  backup.nxsi.profile: "database"
  backup.nxsi.verify-type: "database"
  backup.nxsi.verify-image: "postgres:17"
```
## Profile 2: Critical
For: Vaultwarden, Bitwarden, Gitea, Forgejo, Authelia, Keycloak, Nextcloud
Critical services are the ones where data loss means real pain. Your password vault. Your Git repositories. Your authentication provider.
The key behavior: stop the container during backup. This guarantees a consistent point-in-time copy. No writes happening, no partial transactions, no file locks.
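The cycle is roughly equivalent to the following (container name and repository path are placeholders; the real tool adds locking and error handling):

```shell
# Stop, snapshot, restart: no writes can land mid-copy.
docker stop vaultwarden
restic -r /mnt/backups/restic backup \
  /var/lib/docker/volumes/vaultwarden_data \
  --tag vaultwarden --tag critical
docker start vaultwarden
```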
```yaml
labels:
  backup.nxsi.enable: "true"
  backup.nxsi.profile: "critical"
  backup.nxsi.verify-url: "http://localhost:80"
```
The downtime is brief -- typically 5-15 seconds for the stop, backup, and restart cycle. For a personal Vaultwarden instance that means your password autofill fails for 10 seconds at 2 AM. Acceptable.
If downtime is not acceptable (maybe you run a public-facing Gitea), you can override the stop behavior with backup.nxsi.stop: "false" while keeping the critical retention policy. But you lose the consistency guarantee. Trade-offs.
Retention is maximum: 30 daily, 12 weekly, 12 monthly. That's a full year of monthly snapshots. For data this important, the storage cost is negligible.
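In restic terms, that policy corresponds to a forget/prune call like this (repository path and tag are placeholders):

```shell
# Keep 30 daily, 12 weekly, 12 monthly snapshots; drop and reclaim the rest.
restic -r /mnt/backups/restic forget \
  --tag vaultwarden \
  --keep-daily 30 --keep-weekly 12 --keep-monthly 12 \
  --prune
```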
## Profile 3: Config-Only
For: Pi-hole, AdGuard, Homepage, Dashy, Portainer, Traefik, Caddy
Some services are valuable but tiny. Pi-hole's configuration is a few dozen KB of lists, local DNS records, and settings. Traefik's dynamic config and TLS certs fit in under a megabyte.
The config-only profile:
- No pre-hooks (no database to dump)
- No stop (configs rarely change mid-backup)
- 90-day flat retention (daily snapshots, no weekly/monthly rollup)
- Exclude logs and cache
```yaml
labels:
  backup.nxsi.enable: "true"
  backup.nxsi.profile: "config-only"
```
90 days of flat retention means you can recover from a bad config change made up to three months ago. For services where the config IS the service (Pi-hole without its custom lists is useless), this is the right policy.
Deduplication makes this nearly free. If Pi-hole's config doesn't change for a week, restic stores one copy and seven pointers. The actual disk cost of 90 daily snapshots of an unchanging 50KB config: 50KB.
## Profile 4: Large Media
For: Plex, Jellyfin, Emby, Sonarr, Radarr, the `*arr` stack
Media servers are a trap. Plex's metadata database and settings are valuable -- your watch history, library organization, custom posters, user accounts. The media files themselves (your 4TB movie collection) are replaceable and should never go through restic.
The large-media profile excludes media file extensions:
```text
*.mp4, *.mkv, *.avi, *.mov, *.mp3, *.flac, *.wav, *.iso, *.img
Transcodes/, Cache/
```
This reduces a 4TB Plex installation to a 200MB backup of metadata and configuration. Restic handles that in seconds.
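Expressed as a plain restic invocation (the volume path is a placeholder), the exclusions look like:

```shell
# Back up Plex's config volume, skipping media files and caches.
restic -r /mnt/backups/restic backup \
  /var/lib/docker/volumes/plex_config \
  --exclude='*.mp4' --exclude='*.mkv' --exclude='*.avi' --exclude='*.iso' \
  --exclude='Transcodes' --exclude='Cache' \
  --tag plex --tag large-media
```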
```yaml
labels:
  backup.nxsi.enable: "true"
  backup.nxsi.profile: "large-media"
```
Retention is short: 7 daily snapshots. Media metadata changes frequently (every watch updates the database), so older snapshots lose value quickly. If you need to rebuild Plex, last week's metadata is almost as good as yesterday's.
The real protection for media files isn't backup -- it's redundancy. RAID, multiple drives, or a NAS with drive mirroring. Backup is for data that can't be re-downloaded.
(If you're running Sonarr/Radarr, those services can re-download everything from your indexers. Back up the config, not the media.)
## Profile 5: Default
For: Everything else -- nginx, custom apps, development services, monitoring tools
The default profile makes no assumptions:
- No pre-hooks
- No stop
- Standard retention: 7 daily, 4 weekly, 3 monthly
- Back up all named volumes
- File-based verification
```yaml
labels:
  backup.nxsi.enable: "true"
```
That's it. Two labels. The backup system auto-detects all named volumes attached to the container and includes them.
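The volume auto-detection can be reproduced with a one-liner (assuming anonymous volumes, whose names are 64 hex characters, get filtered out separately):

```shell
# Print the named volumes mounted into a container, one per line.
docker inspect --format \
  '{{range .Mounts}}{{if eq .Type "volume"}}{{.Name}}{{"\n"}}{{end}}{{end}}' \
  my-service
```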
For most services, the default profile is correct. Only use a specific profile when the service needs something different (database dumps, stop behavior, extended retention, media exclusions).
## The Label System
All five profiles are configured through Docker labels on your containers. No central config file. No YAML mapping containers to profiles. The backup configuration lives where the service is defined.
Why this matters:
Adding a service: Add the labels to your compose file, run docker compose up -d, done. The next backup run picks it up automatically.
Removing a service: Remove the container. The backup system no longer sees it. Old snapshots are retained per the retention policy and eventually pruned.
Moving between hosts: The labels travel with the compose file. Deploy the same stack on a new server, install the backup system, and every service is configured immediately.
Per-service overrides: Any profile setting can be overridden via labels:
```yaml
labels:
  backup.nxsi.enable: "true"
  backup.nxsi.profile: "database"
  backup.nxsi.retention: "30d,12w,12m"   # Override database default
  backup.nxsi.schedule: "0 */6 * * *"    # Every 6 hours instead of daily
```
The override precedence: explicit label > profile default > global .env default.
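A minimal sketch of that precedence (function and variable names are illustrative, not the tool's internals):

```shell
#!/usr/bin/env bash
# Resolve one setting: explicit label > profile default > global .env default.
resolve_setting() {
  local label="$1" profile_default="$2" env_default="$3"
  if [ -n "$label" ]; then
    echo "$label"
  elif [ -n "$profile_default" ]; then
    echo "$profile_default"
  else
    echo "$env_default"
  fi
}
```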
## Migrating from Manual Backups
If you're currently using rsync cron jobs or Duplicati, the migration is straightforward:
- Map each rsync target to a container
- Add the appropriate profile label
- Run `./scripts/migrate.sh` to detect existing tools and get specific migration steps
- Run a parallel backup with both systems for one week
- Verify with `./scripts/verify.sh --latest`
- Decommission the old system
The backup system detects Duplicati containers, docker-volume-backup (offen) labels, and cron-based backup jobs, then provides per-tool migration guidance.
## Getting the 3-2-1 Rule Right
All five profiles support dual storage: a local repository for fast restores and a remote repository (S3, B2, SFTP) for disaster protection.
The recommended setup for most homelabs:
- Primary: Local disk or NAS (fast restores)
- Secondary: Backblaze B2 (offsite, ~$0.005/GB/month)
After each backup run, new snapshots are automatically copied to the secondary repository. Both repositories use the same encryption key, so you can restore from either one.
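With restic, that copy step looks roughly like this (assuming restic >= 0.14 for `--from-repo`; bucket name, paths, and credential files are placeholders):

```shell
# Copy new snapshots from the local primary into the B2 secondary.
# B2_ACCOUNT_ID / B2_ACCOUNT_KEY must be set in the environment.
restic -r b2:my-bucket:restic \
  --password-file /etc/restic/pass \
  copy \
  --from-repo /mnt/backups/restic \
  --from-password-file /etc/restic/pass
```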
The Homelab Backup Automation Stack includes all 5 profiles, the label system, automated restore verification, and health scoring. Available at nxsi.io.