# C3D2 infrastructure based on NixOS

## Setup

### Enable nix flakes user-wide

Add the setting to the user's nix.conf. Only do this once, or the line will be duplicated!

echo 'experimental-features = nix-command flakes' >> ~/.config/nix/nix.conf
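The bare append fails if `~/.config/nix` does not exist yet. A guarded variant (a sketch) that creates the directory first and is safe to re-run:

```shell
# Create the config directory if missing, then append the setting
# only if it is not already present (idempotent):
mkdir -p ~/.config/nix
grep -qs 'experimental-features' ~/.config/nix/nix.conf \
  || echo 'experimental-features = nix-command flakes' >> ~/.config/nix/nix.conf
```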

### Enable nix flakes system-wide (preferred for NixOS)

Add this to your NixOS configuration:

nix.settings.experimental-features = "nix-command flakes";

## The secrets repo

The secrets repo is deprecated. Everything should be done through sops. If you don't have secrets access, ask sandro or astro to get onboarded.

## Deployment

### Deploy to a remote NixOS system

For every host that has a nixosConfiguration in our flake, there are two scripts that can be run for deployment via SSH.

  • nix run .#HOSTNAME-nixos-rebuild switch

    Copies the current flake to the target system and builds it there. This may fail due to resource limits on, e.g., Raspberry Pis.

  • nix run .#HOSTNAME-nixos-rebuild-local switch

    Builds everything locally, then uses nix copy to transfer the new NixOS system to the target.

    To use the binary cache from our Hydra, set the following Nix options in the same way as when enabling flakes:

    trusted-public-keys = nix-serve.hq.c3d2.de:KZRGGnwOYzys6pxgM8jlur36RmkJQ/y8y62e52fj1ps=
    trusted-substituters = https://nix-serve.hq.c3d2.de
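On NixOS, the same cache settings can instead be set declaratively (a sketch; the key and URL are copied from above):

```nix
# configuration.nix (sketch): trust our Hydra's binary cache
nix.settings = {
  trusted-public-keys = [ "nix-serve.hq.c3d2.de:KZRGGnwOYzys6pxgM8jlur36RmkJQ/y8y62e52fj1ps=" ];
  trusted-substituters = [ "https://nix-serve.hq.c3d2.de" ];
};
```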
    

### Checking for updates

nix run .#list-upgradable

(screenshot: output of list-upgradable)

Checks all hosts with a nixosConfiguration in flake.nix.

### Update from Hydra build

This is the fastest way to update a system, and the manual alternative to setting `c3d2.autoUpdate = true;`.

Just run:

update-from-hydra

### Deploy a MicroVM

Build a MicroVM remotely and deploy:

nix run .#microvm-update-HOSTNAME

Build a MicroVM locally and deploy:

nix run .#microvm-update-HOSTNAME-local

### Update MicroVM from our Hydra

Our Hydra runs nix flake update daily in the updater.timer, pushing the result to the flake-update branch so that it can build fresh systems. This branch is set up as the source flake in all the MicroVMs, so the following is all that is needed on a MicroVM-hosting server:

microvm -Ru $hostname

## Cluster deployment with Skyflake

### About

Skyflake provides Hyperconverged Infrastructure to run NixOS MicroVMs on a cluster. Our setup unifies networking with one bridge per VLAN. Persistent storage is replicated with Cephfs.

A nixosConfiguration is recognized as part of our Skyflake deployment by the inclusion of the self.nixosModules.cluster-options module.

### User interface

We use the less-privileged c3d2@ user for deployment. This flake's name on the cluster is config. Other flakes can coexist in the same user so that we can run separately developed projects like dump-dvb. leon and potentially other users can deploy Flakes and MicroVMs without name clashes.

### Deploying

git push this repo to any machine in the cluster, preferably to Hydra, where building won't disturb any services.

You don't deploy all MicroVMs at once. Instead, Skyflake lets you select NixOS systems by the branches you push to. Remember to commit before you push!

Example: deploy nixosConfigurations mucbot and sdrweb (HEAD is your current commit)

git push c3d2@hydra.serv.zentralwerk.org:config HEAD:mucbot HEAD:sdrweb

This will:

  1. Build the configuration on Hydra, refusing the branch update on broken builds (through a git hook)
  2. Copy the MicroVM package and its dependencies to the binary cache that is accessible to all nodes with Cephfs
  3. Submit one job per MicroVM into the Nomad cluster

Deleting a nixosConfiguration's branch will stop the MicroVM in Nomad.

### Updating

TODO: how would you like it?

### MicroVM status

ssh c3d2@hydra.serv.zentralwerk.org status

### Debugging for cluster admins

#### Nomad

Check the cluster state:

nomad server members

Nomad servers coordinate the cluster.

Nomad clients run the tasks.
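Beyond the server membership view, the stock Nomad CLI can also show clients and jobs (a sketch; run these on a cluster node):

```shell
nomad node status             # client nodes, their state and eligibility
nomad job status              # jobs in the cluster, one per MicroVM
nomad alloc logs <alloc-id>   # logs of a single allocation
```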

#### Browse in the terminal

wander and damon are nice TUIs that are preinstalled on our cluster nodes.

#### Browse with a browser

First, tunnel TCP port :4646 from a cluster server:

ssh -L 4646:localhost:4646 root@server10.cluster.zentralwerk.org

Then, visit https://localhost:4646 for the full graphical interface.

#### Reset the Nomad state on a node

After upgrades, Nomad servers may fail rejoining the cluster. Do this to make a Nomad server behave like a newborn:

systemctl stop nomad
rm -rf /var/lib/nomad/server/raft/
systemctl start nomad

## Secrets management

### Secrets management using sops-nix

#### Adding a new host

Edit .sops.yaml:

  1. Add an AGE key for this host. Comments in this file tell you how to do it.
  2. Add a creation_rules section for host/$host/*.yaml files
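A sketch of what the resulting entries could look like (the AGE key and the host name newhost are placeholders, not real values; the comments in .sops.yaml remain the authoritative reference):

```yaml
keys:
  - &host_newhost age1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
creation_rules:
  - path_regex: host/newhost/.*\.yaml$
    key_groups:
      - age:
          - *host_newhost
```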

#### Editing a host's secrets

Edit .sops.yaml to add files for a new host and its SSH pubkey.

# Get sops
nix develop
# Decrypt, start an $EDITOR, re-encrypt on save
sops hosts/.../secrets.yaml
# Push
git commit -a -m 'Add new secrets'
git push origin

### Secrets management with PGP

Add your gpg-id to the .gpg-id file in secrets and let somebody re-encrypt it for you. Maybe this works for you, maybe not. I did it somehow:

PASSWORD_STORE_DIR=`pwd` tr '\n' ' ' < .gpg-id | xargs -I{} pass init {}

Your GPG key has to have the Authenticate flag set. If not, update it, push it to a keyserver, and wait. This is necessary so that you can log in to any machine with your GPG key.
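Checking and changing the flags can look roughly like this (a sketch with a placeholder key id; change-usage is an interactive subcommand, and the keyserver is just one example):

```shell
# List the key and inspect the usage field (look for "a" = Authenticate)
gpg --list-keys --with-colons YOUR_KEY_ID | grep '^pub'
# Toggle usage flags interactively; at the gpg> prompt run: change-usage
gpg --expert --edit-key YOUR_KEY_ID
# Publish the updated key (example keyserver)
gpg --keyserver keys.openpgp.org --send-keys YOUR_KEY_ID
```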

## Laptops / Desktops

This repository contains a NixOS module that can be used with personal machines as well. It appends the host keys of registered HQ hosts to /etc/ssh/ssh_known_hosts, and optionally adds static IPv6 addresses local to HQ to /etc/hosts. Simply import the lib directory to use the module. As an example:

# /etc/nixos/configuration.nix
{ config, pkgs, lib, ... }:
let
  # Using a flake is recommended instead
  c3d2Config = builtins.fetchGit { url = "https://gitea.c3d2.de/C3D2/nix-config.git"; };
in {
  imports = [
    "${c3d2Config}/modules/c3d2.nix"
  ];

  c3d2 = {
    ...
  };
}

## Server ZFS setup

For the other steps, follow https://nixos.org/manual/nixos/unstable/index.html#sec-installation

sgdisk --zap-all /dev/sda
parted /dev/sda -- mklabel gpt
parted /dev/sda -- mkpart primary 512MB -40GB
parted /dev/sda -- mkpart primary linux-swap -40GB 100%
parted /dev/sda -- mkpart ESP fat32 1MB 512MB
parted /dev/sda -- set 3 esp on
mkswap -L swap /dev/sda2
mkfs.fat -F 32 -n boot /dev/sda3
zpool create \
  -o ashift=12 \
  -o autotrim=on \
  -R /mnt \
  -O acltype=posixacl \
  -O canmount=off \
  -O compression=zstd \
  -O dnodesize=auto \
  -O normalization=formD \
  -O relatime=on \
  -O xattr=sa \
  -O mountpoint=/ \
  hydra /dev/sda1
zfs create -o canmount=on -o mountpoint=/ hydra/nixos
zfs create -o canmount=on -o mountpoint=/nix hydra/nixos/nix
zfs create -o canmount=on -o atime=off -o mountpoint=/nix/store hydra/nixos/nix/store
zfs create -o canmount=on -o mountpoint=/nix/var hydra/nixos/nix/var
zfs create -o canmount=off -o mountpoint=none hydra/data
zfs create -o canmount=on -o mountpoint=/etc hydra/data/etc
zfs create -o canmount=on -o mountpoint=/var hydra/data/var
zfs create -o canmount=on -o mountpoint=/var/backup hydra/data/var/backup
zfs create -o canmount=on -o mountpoint=/var/lib hydra/data/var/lib
zfs create -o canmount=on -o mountpoint=/var/log hydra/data/var/log
zfs create -o canmount=on -o mountpoint=/home hydra/data/home
zfs create -o canmount=off -o mountpoint=none -o refreservation=1G hydra/reserved