A reproducible homelab/infra setup managing a router, server, and workstation with Nix flakes. Orchestrated and deployed with Clan, with automated secrets management and additional ergonomics via a custom vars-helper. The repo largely follows the Dendritic pattern, organizing configuration as composable flake-parts modules.
- Orchestration & deployment: Clan framework
- Secrets: Clan automated secrets + vars-helper for ACLs and access ergonomics
- Pattern: Dendritic (every file is a flake-parts module)
- Key modules:
  - Router abstraction (routing, DHCP, DNS, WireGuard, nginx, monitoring)
  - Backups abstraction (restic to B2/S3, auto bucket bootstrap, restore mode)
- Clan: clan.lol
- Dendritic pattern: github.com/mightyiam/dendritic
This repo is designed to be driven by Clan, providing:
- Uniform interface across machines and services
- Automated secret management and provisioning
- Automated service setup and backups
- Peer-to-peer mesh VPN and live overwrites
See: https://clan.lol/.
Configuration is authored as flake-parts modules, promoting reuse across NixOS and Home Manager scopes, and enabling cross-cutting concerns. Values are shared via the flake config rather than ad-hoc specialArgs.
See: https://github.com/mightyiam/dendritic.
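As an illustration, a Dendritic-style file might declare a shared flake-level option and contribute a NixOS module in one place. This sketch is illustrative, not taken from this repo; the option and module names are hypothetical:

```nix
# A hypothetical flake-parts module: in the Dendritic pattern,
# every file has this shape and is imported into the flake.
{ config, lib, ... }: {
  # Share a value via the flake config instead of ad-hoc specialArgs.
  options.my.domain = lib.mkOption {
    type = lib.types.str;
    default = "example.internal";
    description = "Base domain shared across machines.";
  };

  # Contribute a NixOS module that reads the shared value.
  flake.nixosModules.motd = { ... }: {
    users.motd = "Welcome to ${config.my.domain}";
  };
}
```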
Secrets are declared and generated via Clan. The custom vars-helper adds:
- Secret discovery from a generators directory with tag filtering
- ACLs to grant read access to specific systemd units/services
- Ergonomics around reading secrets paths from the declarative config
See: https://github.com/perstarkse/clan-vars-helper.
Example usage in `machines/makemake/configuration.nix`:

```nix
my.secrets.discover = {
  enable = true;
  dir = ../../vars/generators;
  includeTags = ["makemake" "minne" "surrealdb" "b2"];
};

my.secrets.allowReadAccess = [
  {
    readers = ["minne"];
    path = config.my.secrets.getPath "minne-env" "env";
  }
  {
    readers = ["surrealdb"];
    path = config.my.secrets.getPath "surrealdb-credentials" "credentials";
  }
];
```

- `machines/io`: Router (LAN bridge, DHCP, DNS, WireGuard, nginx, monitoring)
- `machines/makemake`: Server (Vaultwarden, OpenWebUI, SurrealDB, Minne)
- `machines/charon`: Workstation (worker for distributed services)
- `machines/sedna`: External monitor
Each machine imports shared modules via flake-parts, follows consistent patterns, and consumes secrets declaratively.
Recommended commands:
- `nix build path:.#router-checks` — router integration suite (`router-smoke`, `router-vlan-regression`, `router-services`, `router-port-forward`, `router-wireguard`).
- `nix build path:.#predeploy-check` — `io-predeploy` only (real `machines/io/configuration.nix` with test overrides/stubs).
- `nix build path:.#final-checks` — router suite + `io-predeploy`.
- `nix flake check path:.` — all configured checks in this flake.
Useful targeted checks:
- `nix build path:.#checks.x86_64-linux.router-services` for nginx/domain routing changes.
- `nix build path:.#checks.x86_64-linux.router-port-forward` for NAT/port-forward changes.
- `nix build path:.#checks.x86_64-linux.io-predeploy` for full `io` predeploy coverage only.
Notes:
- Prefer `path:.#...` during local work; it includes uncommitted files.
- `nix build path:.#checks.x86_64-linux` builds all checks for that system.
- Add `--show-trace` to any command for full error traces.
Use the `machine-update` command for deploying machines with automatic preflight checks:

```shell
machine-update <machine> [<machine> ...] [options]
```

`machine-update` resolves profile tags from Clan inventory (`check-profile-*`) and runs the union:
| Profile tag | Additional checks |
|---|---|
| `check-profile-fast` | none (treefmt only) |
| `check-profile-router` | `router-checks` |
| `check-profile-io-predeploy` | `predeploy-check` |
| `check-profile-io-final` | `final-checks` |
| `check-profile-garage` | `garage-checks` |
| `check-profile-politikerstod` | `politikerstod-checks` |
| `check-profile-wireguard` | `wireguard-checks` |
| `check-profile-paperless` | `paperless-checks` |
| `check-profile-backups` | `backups-checks` |
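Profile tags are ordinary Clan inventory tags. The exact attribute path depends on your Clan version and inventory layout, but tagging machines might look roughly like this (sketch, not verbatim from this repo):

```nix
# Hypothetical inventory snippet: attach check profiles as machine tags.
inventory.machines = {
  io.tags = [ "check-profile-io-final" ];
  makemake.tags = [ "check-profile-fast" "check-profile-backups" ];
};
```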
All machines always run `nix fmt` + treefmt verification first (unless `--force`).
- `--force` — Skip all preflight checks and deploy immediately.
- `--checks-only` — Run preflight checks only, skip deploy.
- `--explain` — Print resolved profile/check plan and exit.
- `--base-ref <ref>` — Baseline for dynamic lockfile detectors.
- `--clan-help` — Show upstream `clan machines update` help and exit.
- Multiple machine names can be passed and will be deployed sequentially.
- Extra Clan flags can be forwarded after `--`.
```shell
# Deploy io with full checks
machine-update io

# Run checks only for io (no deploy)
machine-update io --checks-only

# Deploy makemake, skipping checks (fast; --force is blocked for io)
machine-update makemake --force

# Explain plan only
machine-update io --explain

# Deploy makemake (fast check only)
machine-update makemake

# Deploy two machines; overlapping checks run once
machine-update ariel makemake

# Deploy with extra Clan flags
machine-update ariel -- --debug --host-key-check accept-new

# Show upstream Clan update help
machine-update --clan-help

# Plan only (JSON)
machine-update-plan io --json
```

Notes:
- `io` is configured with `clan.core.deployment.requireExplicitUpdate = true`, so broad updates do not include it by accident.
- `io` always gets the mandatory `check-profile-io-final` (router + `io-predeploy`) regardless of inventory tags.
- `--force` is blocked for `io` to prevent bypassing critical safety checks.
- In multi-machine mode, `--force` must be allowed for every selected machine.
- `machine-update` validates machine names before running checks; unknown names fail fast.
- Multiple `check-profile-*` tags on a machine run as a union of checks.
- Multiple machines run the deduplicated union of all required checks once before deploy.
- Dynamic lockfile detectors can append checks (e.g. `politikerstod` input changes add `politikerstod-checks`).
- Detector warnings are non-blocking and shown before checks.
Additional local check bundles:
- `nix build path:.#garage-checks`
- `nix build path:.#politikerstod-checks`
- `nix build path:.#wireguard-checks`
- `nix build path:.#paperless-checks`
- `nix build path:.#backups-checks`
- `nix build path:.#backups-multi-checks`
- `nix build path:.#backups-failure-checks`
- Path: `modules/system/router/core.nix`
- Consumers: e.g. `machines/io/configuration.nix`
- LAN: bridge with configurable interfaces and subnet
- DHCP: Kea with declarative leases and timings
- DNS: Unbound with DoT upstreams and local zone
- WireGuard: server with peers, keepalive, LAN routing
- nginx reverse proxy: ACME automation (including DNS-01 per-vhost), Cloudflare-only or LAN-only ACLs, WebSocket support, extra config snippets
- Monitoring: Prometheus exporters (node, unbound), Prometheus, optional Grafana, Netdata, ntopng
```nix
my.router = {
  enable = true;
  hostname = "io";

  lan = {
    subnet = "10.0.0";
    dhcpRange = { start = 100; end = 200; };
    interfaces = ["enp2s0" "enp3s0" "enp4s0"];
  };

  wan.interface = "enp1s0";
  ipv6.ulaPrefix = "fd00:711a:edcd:7e75";

  wireguard = {
    enable = true;
    peers = [
      {
        name = "phone";
        ip = 2;
        publicKey = "...";
        persistentKeepalive = 25;
      }
    ];
  };

  machines = [
    { name = "charon"; ip = "15"; mac = "f0:2f:74:de:91:0a"; portForwards = []; }
    { name = "makemake"; ip = "10"; mac = "00:d0:b4:02:bb:3c";
      portForwards = [ { port = 25; } { port = 465; } { port = 993; } { port = 32400; } ];
    }
  ];

  dns = {
    enable = true;
    upstreamServers = [
      "1.1.1.1@853#cloudflare-dns.com"
      "1.0.0.1@853#cloudflare-dns.com"
      "2606:4700:4700::1111@853#cloudflare-dns.com"
      "2606:4700:4700::1001@853#cloudflare-dns.com"
    ];
    localZone = "lan.";
  };

  nginx = {
    enable = true;
    acmeEmail = "email@domain.tld";
    ddclient.enable = true;
    virtualHosts = [
      { domain = "service.domain.tld"; target = "makemake"; port = 7909; websockets = true; cloudflareOnly = true; }
      {
        domain = "service2.domain.tld"; target = "makemake"; port = 3000; cloudflareOnly = true; websockets = false;
        extraConfig = ''
          proxy_set_header Connection "close";
          proxy_http_version 1.1;
          chunked_transfer_encoding off;
          proxy_buffering off;
          proxy_cache off;
        '';
      }
      # Example DNS-01 per-vhost
      { domain = "service.domain.tld"; target = "makemake"; port = 8322; websockets = true; lanOnly = true;
        acmeDns01 = {
          dnsProvider = "cloudflare";
          environmentFile = config.my.secrets.getPath "api-key-cloudflare-dns" "api-token";
        };
      }
    ];
  };
};
```

Configuration options are self-documented in `modules/system/router/core.nix` via `mkOption`, including defaults for interfaces, ULA prefix, exporter settings, and nginx access controls.
- Path: `modules/system/backups.nix`
- Consumers: e.g. `machines/makemake/configuration.nix`
- Provider: restic to Backblaze B2 or S3
- Secrets: repository URL, password, and provider creds provisioned via Clan + vars-helper
- Bootstrap: optional automatic bucket creation and server-side encryption (B2), optional lifecycle rules (keep prior versions)
- Scheduling: simple `hourly | daily | weekly`
- Include/Exclude: path filters per backup job
- Restore mode: flip a flag to run a one-shot restic restore to the target path
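For instance, a job combining scheduling with path filters might look like the following sketch. The job name, paths, and the `exclude` option are hypothetical, inferred from the feature list above; check `modules/system/backups.nix` for the exact option names:

```nix
my.backups.photos = {
  enable = true;
  path = "/srv/photos";
  frequency = "weekly";
  # Hypothetical filter option; verify against the module's mkOption docs.
  exclude = [ "/srv/photos/cache" "*.tmp" ];
  backend = { type = "s3"; bucket = "my-photos-backup"; };
};
```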
```nix
my.backups = {
  minne = {
    enable = true;
    path = config.my.minne.dataDir;
    frequency = "daily";
    backend = { type = "b2"; bucket = null; lifecycleKeepPriorVersionsDays = 30; };
  };
  vaultwarden = {
    enable = true;
    path = config.my.vaultwarden.backupDir;
    frequency = "daily";
    backend = { type = "b2"; bucket = null; lifecycleKeepPriorVersionsDays = 30; };
  };
  surrealdb = {
    enable = true;
    path = config.my.surrealdb.dataDir;
    frequency = "daily";
    backend = { type = "b2"; bucket = null; lifecycleKeepPriorVersionsDays = 30; };
  };
};
```

To restore, switch a job into restore mode and choose a snapshot:

```nix
my.backups.minne.restore = {
  enable = true;
  snapshot = "latest"; # or a specific snapshot ID
};
```

The module sets up a `restic-restore-<name>` oneshot unit that restores into the job's `path` using the provisioned repo, password, and env files.
The vars-helper augments Clan secrets with secret discovery, ACLs, exposure of user secrets at chosen destinations, and wrapper binaries that inject secret-backed environment variables.
```nix
my.secrets.discover = {
  enable = true;
  dir = ../../vars/generators;
  includeTags = ["aws" "openai" "openrouter" "user"];
};

my.secrets.exposeUserSecrets = [
  {
    enable = true;
    secretName = "user-ssh-key";
    file = "key";
    user = config.my.mainUser.name;
    dest = "/home/${config.my.mainUser.name}/.ssh/id_ed25519";
  }
  {
    enable = true;
    secretName = "user-age-key";
    file = "key";
    user = config.my.mainUser.name;
    dest = "/home/${config.my.mainUser.name}/.config/sops/age/keys.txt";
  }
];

my.secrets.allowReadAccess = [
  {
    readers = [config.my.mainUser.name];
    path = config.my.secrets.getPath "api-key-openai" "api_key";
  }
  {
    readers = [config.my.mainUser.name];
    path = config.my.secrets.getPath "api-key-openrouter" "api_key";
  }
  {
    readers = [config.my.mainUser.name];
    path = config.my.secrets.getPath "api-key-aws-access" "aws_access_key_id";
  }
  {
    readers = [config.my.mainUser.name];
    path = config.my.secrets.getPath "api-key-aws-secret" "aws_secret_access_key";
  }
];

home-manager.users.${config.my.mainUser.name} = {
  my.secrets.wrappedHomeBinaries = [
    {
      name = "mods";
      title = "Mods";
      setTerminalTitle = true;
      command = "${pkgs.mods}/bin/mods";
      envVar = "OPENAI_API_KEY";
      secretPath = config.my.secrets.getPath "api-key-openai" "api_key";
      useSystemdRun = true;
    }
  ];
};
```

These abstractions let you declare who can read which secrets, where they should be materialized, and how to inject them into processes, all from Nix configuration.