---
gitea: none
title: Flockige Infrastruktur deklarativ
include_toc: yes
lang: en
---
# C3D2 infrastructure based on NixOS
## Setup
### Enable nix flakes user-wide
Add the setting to your user `nix.conf`. Only run this once, since the command appends to the file:
```bash
mkdir -p ~/.config/nix
echo 'experimental-features = nix-command flakes' >> ~/.config/nix/nix.conf
```
### Enable nix flakes system-wide (preferred for NixOS)
Add this to your NixOS configuration:
```nix
nix.settings.experimental-features = "nix-command flakes";
```
### The secrets repo
The separate secrets repo is deprecated; everything should be handled through sops.
If you don't have secrets access, ask sandro or astro to get onboarded.
## Deployment
### Deploy to a remote NixOS system
For every host that has a `nixosConfiguration` in our Flake, there are two scripts that can be run for deployment via ssh.
- `nix run .#HOSTNAME-nixos-rebuild switch`
  Copies the current flake state to the target system and builds it there.
  This may fail due to resource limits, e.g. on Raspberry Pis.
- `nix run .#HOSTNAME-nixos-rebuild-local switch`
  Builds everything locally, then uses `nix copy` to transfer the new NixOS system to the target.
To use the binary cache from our Hydra, set the following Nix options, in the same way as enabling flakes above:
```
trusted-public-keys = nix-serve.hq.c3d2.de:KZRGGnwOYzys6pxgM8jlur36RmkJQ/y8y62e52fj1ps=
trusted-substituters = https://nix-serve.hq.c3d2.de
```
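On NixOS, the same cache settings can be declared in the system configuration instead of editing `nix.conf` by hand; a minimal sketch using the key and URL above:
```nix
nix.settings = {
  # Public key and substituter of our Hydra's nix-serve cache
  trusted-public-keys = [ "nix-serve.hq.c3d2.de:KZRGGnwOYzys6pxgM8jlur36RmkJQ/y8y62e52fj1ps=" ];
  trusted-substituters = [ "https://nix-serve.hq.c3d2.de" ];
};
```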
### Checking for updates
```shell
nix run .#list-upgradable
```
![list-upgradable output](doc/list-upgradable.png)
Checks all hosts with a `nixosConfiguration` in `flake.nix`.
### Update from [Hydra build](https://hydra.hq.c3d2.de/jobset/c3d2/nix-config#tabs-jobs)
This is the fastest way to update a system and a manual alternative to setting
`c3d2.autoUpdate = true;`. Just run:
```shell
update-from-hydra
```
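For the fully automatic route mentioned above, the option is set in the host's NixOS configuration instead; a minimal sketch:
```nix
{
  # Let the host pull and activate new builds from Hydra on its own
  c3d2.autoUpdate = true;
}
```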
### Deploy a MicroVM
#### Build a MicroVM remotely and deploy
```shell
nix run .#microvm-update-HOSTNAME
```
#### Build a MicroVM locally and deploy
```shell
nix run .#microvm-update-HOSTNAME-local
```
#### Update MicroVM from our Hydra
Our Hydra runs `nix flake update` daily via the `updater.timer`,
pushing the result to the `flake-update` branch so that it can build fresh
systems. This branch is set up as the source flake in all the MicroVMs,
so the following is all that is needed on a MicroVM-hosting server:
```shell
microvm -Ru $hostname
```
## Cluster deployment with Skyflake
### About
[Skyflake](https://github.com/astro/skyflake) provides Hyperconverged
Infrastructure to run NixOS MicroVMs on a cluster. Our setup unifies
networking with one bridge per VLAN. Persistent storage is replicated
with Cephfs.
A `nixosConfiguration` is part of our Skyflake deployment if it includes the
`self.nixosModules.cluster-options` module.
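A minimal sketch of what such a `nixosConfiguration` might look like in `flake.nix`; the host name `example` and its module path are hypothetical:
```nix
nixosConfigurations.example = nixpkgs.lib.nixosSystem {
  system = "x86_64-linux";
  modules = [
    # Including this module marks the host for the Skyflake cluster deployment
    self.nixosModules.cluster-options
    ./hosts/example/default.nix
  ];
};
```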
### User interface
We use the less-privileged `c3d2@` user for deployment. This flake's
name on the cluster is `config`. Other flakes can coexist in the same
user so that we can run separately developed projects like
*dump-dvb*. *leon* and potentially other users can deploy Flakes and
MicroVMs without name clashes.
#### Deploying
**git push** this repo to any machine in the cluster, preferably to
Hydra, because building there won't disturb any services.
You don't deploy all MicroVMs at once. Instead, Skyflake allows you to
select NixOS systems by the branches you push to. **You must commit
before you push!**
**Example:** deploy nixosConfigurations `mucbot` and `sdrweb` (`HEAD` is your
current commit)
```bash
git push c3d2@hydra.serv.zentralwerk.org:config HEAD:mucbot HEAD:sdrweb
```
This will:
1. Build the configuration on Hydra, refusing the branch update on
broken builds (through a git hook)
2. Copy the MicroVM package and its dependencies to the binary cache
that is accessible to all nodes with Cephfs
3. Submit one job per MicroVM into the Nomad cluster
*Deleting* a nixosConfiguration's branch will **stop** the MicroVM in Nomad.
#### Updating
**TODO:** how would you like it?
#### MicroVM status
```bash
ssh c3d2@hydra.serv.zentralwerk.org status
```
### Debugging for cluster admins
#### Nomad
##### Check the cluster state
```shell
nomad server members
```
Nomad *servers* **coordinate** the cluster.
Nomad *clients* **run** the tasks.
##### Browse in the terminal
[wander](https://github.com/robinovitch61/wander) and
[damon](https://github.com/hashicorp/damon) are nice TUIs that are
preinstalled on our cluster nodes.
##### Browse with a browser
First, tunnel TCP port `:4646` from a cluster server:
```bash
ssh -L 4646:localhost:4646 root@server10.cluster.zentralwerk.org
```
Then visit https://localhost:4646 for the full point-and-click experience.
##### Reset the Nomad state on a node
After upgrades, Nomad servers may fail to rejoin the cluster. Do this
to make a *Nomad server* behave like a newborn:
```shell
systemctl stop nomad
rm -rf /var/lib/nomad/server/raft/
systemctl start nomad
```
## Secrets management
### Secrets Management Using `sops-nix`
#### Adding a new host
Edit `.sops.yaml`:
1. Add an AGE key for this host. Comments in this file tell you how to do it.
2. Add a `creation_rules` section for `hosts/$host/*.yaml` files
#### Editing a host's secrets
For a new host, first add its files and SSH pubkey to `.sops.yaml` as described above.
```bash
# Get sops
nix develop
# Decrypt, start an $EDITOR, encrypt
sops hosts/.../secrets.yaml
# Push
git commit -a -m "Add new secrets"
git push origin
```
### Secrets management with PGP
Add your gpg-id to the `.gpg-id` file in secrets and let somebody re-encrypt the store for you.
Maybe this works for you, maybe not. I did it somehow:
```bash
# Initialize the pass store in the current directory for all ids in .gpg-id
PASSWORD_STORE_DIR=$(pwd) pass init $(cat .gpg-id)
```
Your gpg key has to have the Authenticate flag set. If it doesn't, update it, push it to a keyserver, and wait.
This is necessary so you can log in to any machine with your gpg key.
## Laptops / Desktops
This repository contains a NixOS module that can be used with personal machines
as well. This module appends the host keys of registered HQ hosts to
`/etc/ssh/ssh_known_hosts`, and optionally adds HQ-local static IPv6 addresses
to `/etc/hosts`. Simply import the `lib` directory to use the module. As
an example:
```nix
# /etc/nixos/configuration.nix
{ config, pkgs, lib, ... }:
let
  # Using a flake is recommended instead
  c3d2Config = builtins.fetchGit { url = "https://gitea.c3d2.de/C3D2/nix-config.git"; };
in {
  imports = [
    "${c3d2Config}/modules/c3d2.nix"
  ];

  c3d2 = {
    ...
  };
}
```
## Server ZFS setup
For the other installation steps, follow https://nixos.org/manual/nixos/unstable/index.html#sec-installation
```
# Wipe the disk and partition it: root (1), swap (2), ESP (3)
sgdisk --zap-all /dev/sda
parted /dev/sda -- mklabel gpt
parted /dev/sda -- mkpart primary 512MB -40GB
parted /dev/sda -- mkpart primary linux-swap -40GB 100%
parted /dev/sda -- mkpart ESP fat32 1MB 512MB
parted /dev/sda -- set 3 esp on
# Format swap and the EFI system partition
mkswap -L swap /dev/sda2
mkfs.fat -F 32 -n boot /dev/sda3
# Create the ZFS pool "hydra" on the root partition
zpool create \
-o ashift=12 \
-o autotrim=on \
-R /mnt \
-O acltype=posixacl \
-O canmount=off \
-O compression=zstd \
-O dnodesize=auto \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/ \
hydra /dev/sda1
# Create datasets for the system, the Nix store, and persistent data
zfs create -o canmount=on -o mountpoint=/ hydra/nixos
zfs create -o canmount=on -o mountpoint=/nix hydra/nixos/nix
zfs create -o canmount=on -o atime=off -o mountpoint=/nix/store hydra/nixos/nix/store
zfs create -o canmount=on -o mountpoint=/nix/var hydra/nixos/nix/var
zfs create -o canmount=off -o mountpoint=none hydra/data
zfs create -o canmount=on -o mountpoint=/etc hydra/data/etc
zfs create -o canmount=on -o mountpoint=/var hydra/data/var
zfs create -o canmount=on -o mountpoint=/var/backup hydra/data/var/backup
zfs create -o canmount=on -o mountpoint=/var/lib hydra/data/var/lib
zfs create -o canmount=on -o mountpoint=/var/log hydra/data/var/log
zfs create -o canmount=on -o mountpoint=/home hydra/data/home
zfs create -o canmount=off -o mountpoint=none -o refreservation=1G hydra/reserved
```