Debian base workstation setup
November 14, 2024
I’m setting up my workstation with a fresh install. This is a record of the details for future replication/repair. This workstation is intended to be used (as it has before) for heavy programming work, and I expect it to become very specialized to that task. I want to keep the system simple to administer, but also have quite a particular userspace.
The aim here is to end up with a Debian system with an ext4 root filesystem, and a mirrored pair of hard disks with ZFS. All disks are encrypted with LUKS.
Additional software will be added to get to a reasonable development environment, but this page will contain just the details of getting to a basic minimal install.
Using ZFS means we need to do a fair bit by hand. This page is based on my notes from the install - odd things may be missing (e.g. package installs) - use common sense.
My first impressions of ZFS have left me pleased. The tooling is easy to grok, and looks powerful.
Preparation
Useful reference material
- Arch Linux documentation for LVM on LUKS - useful for LVM setup, as well as bootloader configuration.
- ZFS on Debian Bookworm official documentation - generally useful guide.
- Alternate guide to installing Debian on ZFS - gives a useful cross comparison with the above
- Debian ZFS documentation - information about ZFS on Debian.
Installation media
Get the live environment and copy it to a USB stick. Since this will use a manual bootstrap, don’t use the netinstall media - the live environment boots into a much more usable system, and we need to install ZFS utilities to set up the disks.
A live system with just a shell would be ideal, but there isn’t one, so I’m using the XFCE variant because it’ll do just fine with minimal nonsense.
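Writing the image to the stick can be done with dd (a sketch - the ISO filename and DEVICE are placeholders; double-check the device with lsblk first, as this destroys its contents):

```shell
# Write the live image to the USB stick (replace DEVICE with e.g. sdX)
$ sudo dd if=debian-live-xfce.iso of=/dev/DEVICE bs=4M status=progress oflag=sync
```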
Boot into the live environment.
Install extra packages
Add the contrib source to /etc/apt/sources.list and run sudo apt update.
$ sudo apt -y install debootstrap zfs-dkms zfs-zed zfsutils-linux cryptsetup
Load the zfs modules
$ sudo modprobe zfs
Verify.
$ zfs version
Bootstrap
Disk setup - root drive
Logical partitioning
The first part of the setup consists of partitioning and setting up the various disks with the right filesystems.
For the main SSD, set up (I used fdisk):
1. A new GPT partition table.
2. Add a 1GB ESP partition.
3. Make the rest a single partition (this will be a Linux LUKS container, but can just be marked Linux filesystem in GPT partition types).
fdisk has an interactive shell to help do this.
Note that this install is using an unencrypted boot partition.
Now create a LUKS container on the second (non-ESP) partition, open it, and set up LVM2 structures on the mapped container.
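The LUKS steps look something like this (a sketch - ROOT_PARTITION and the mapper name system_crypt are placeholders):

```shell
# Create and open the LUKS container on the root partition
$ sudo cryptsetup luksFormat /dev/ROOT_PARTITION
$ sudo cryptsetup open /dev/ROOT_PARTITION system_crypt
# The mapped container is now available at /dev/mapper/system_crypt
```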
$ sudo pvcreate MAPPED_DRIVE
$ sudo vgcreate system MAPPED_DRIVE # replace system with a name of your choice
and then logical volumes (swap space, some reserved space for LVM snapshots, and one for the root filesystem)
$ sudo lvcreate -L 32G -n swap system
$ sudo lvcreate -L 32G -n snap system
$ sudo lvcreate -l 100%FREE -n root system
Reduce the last logical volume to allow e2scrub to work.
$ sudo lvreduce -L -256M system/root
Note that we’ll remove the snap volume later - it’s just there to reserve the space for snapshots without having to do the maths.
Filesystems
$ sudo mkfs.ext4 -L system-root /dev/system/root
$ sudo mkswap -L system-swap /dev/system/swap
(the labels are useful later when setting up /etc/fstab),
and for the ESP partition
$ sudo mkfs.fat -F32 /dev/EFI_PARTITION
Disk setup - ZFS drives
These will be used for user home directories, at the least.
I used a LUKS container for the whole drive (no partitioning) on each of the two drives I wanted in my pool.
This means that three unlocks will initially be needed on boot, but keyfiles can be added later so that only the root container needs unlocking manually.
Open these LUKS containers.
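Something like the following (a sketch - DISK1/DISK2 and the mapper names are placeholders):

```shell
# Create and open a LUKS container spanning each whole drive
$ sudo cryptsetup luksFormat /dev/DISK1
$ sudo cryptsetup open /dev/DISK1 data_crypt1
$ sudo cryptsetup luksFormat /dev/DISK2
$ sudo cryptsetup open /dev/DISK2 data_crypt2
```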
Now create the ZFS pool from these containers. I’m calling my pool datapool.
$ sudo zpool create \
-o ashift=12 \
-O acltype=posixacl \
-O xattr=sa \
-O dnodesize=auto \
-O compression=lz4 \
-O relatime=on \
-O canmount=off \
-O mountpoint=/ \
datapool mirror LUKS_DEVICES
Compared with the official documentation for installing Debian 12 on ZFS, I didn’t enable autotrim (these aren’t SSDs), and I didn’t set normalization (which would enforce UTF-8 filepaths).
ZFS has its own native encryption - I wanted the entire disk encrypted, and from the measurements I’d seen using a LUKS container is only marginally slower.
Create a home dataset (note this mounts the new filesystem to /home).
$ sudo zfs create -o canmount=on -o mountpoint=/home datapool/home
Export and reimport the pool (-N means don’t mount, -R /mnt bases the pool’s mountpoints off /mnt).
$ sudo zpool export datapool
$ sudo zpool import -N -R /mnt datapool
Note: I forgot to set the mountpoint of the pool originally - since the pool itself can’t be mounted anyway, this isn’t a major issue.
Nonetheless, it’s useful to have the pool’s root dataset reflect the default root base for the pool. The mountpoint can be changed (fairly dynamically) using zfs set mountpoint=/ datapool, for example.
Bootstrap
Mount all the filesystems onto /mnt.
$ sudo mount /dev/mapper/system-root /mnt
$ sudo mkdir /mnt/boot
$ sudo mount ESP_PARTITION /mnt/boot
$ sudo zfs mount datapool/home
Verify the filesystems are mounted correctly under /mnt.
Unclear if this is necessary, but it doesn’t hurt
$ sudo udevadm trigger
Perform the bootstrap
$ sudo debootstrap bookworm /mnt
Basic setup in chroot
Most setup can be done once the system is booted, the main things to get done here are
- Install a kernel.
- Install packages and configure the system to be able to mount the disks.
- Install a bootloader.
Rescue can always be done from the live CD by unlocking and remounting disks, and dropping back into the chroot.
For example, I forgot to install lvm2 on my first pass, and had to go back and do it.
Move to chroot
$ sudo cp /etc/hostid /mnt/etc/
$ sudo cp /etc/resolv.conf /mnt/etc/
Mount filesystems needed for chroot
$ sudo mount -t proc proc /mnt/proc
$ sudo mount -t sysfs sys /mnt/sys
$ sudo mount -B /dev /mnt/dev
$ sudo mount -t devpts pts /mnt/dev/pts
And drop into the new environment
$ sudo chroot /mnt /bin/bash
Setup and package installation
All of the below is inside the chroot.
Edit /etc/hostname and /etc/hosts.
Set up apt sources - edit /etc/apt/sources.list, including backports (from which packages have to be explicitly installed).
deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
deb-src http://deb.debian.org/debian bookworm main contrib non-free-firmware
deb http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware
deb-src http://deb.debian.org/debian-security bookworm-security main contrib non-free-firmware
deb http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware
deb-src http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware
deb http://deb.debian.org/debian/ bookworm-backports main contrib non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm-backports main contrib non-free-firmware
I don’t enable non-free by default. It’s recommended to enable non-free-firmware, although it should be possible to run without it (if you have compatible hardware).
And update the package index
# apt update
The following packages let us get to a minimal bootable system.
# apt install \
linux-image-amd64 \
linux-headers-amd64 \
console-setup \
cryptsetup \
cryptsetup-initramfs \
lvm2 \
dosfstools \
efibootmgr \
locales
and, as recommended on the Debian wiki, install ZFS from backports
# apt -t bookworm-backports install zfsutils-linux
Enable ZFS services
# systemctl enable zfs.target
# systemctl enable zfs-import-cache
# systemctl enable zfs-mount
# systemctl enable zfs-import.target
Now update /etc/crypttab and /etc/fstab.
The ZFS systemd services should take care of ZFS mounts, so only /, /boot and swap need setting up in /etc/fstab.
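As an illustration, the result looks roughly like this (a sketch - the UUIDs and mapper names are placeholders; find the real UUIDs with blkid):

```
# /etc/crypttab - one line per LUKS container
system_crypt  UUID=ROOT_LUKS_UUID   none  luks
data_crypt1   UUID=DATA1_LUKS_UUID  none  luks
data_crypt2   UUID=DATA2_LUKS_UUID  none  luks

# /etc/fstab - ZFS mounts are handled by the ZFS services
LABEL=system-root  /      ext4  defaults  0  1
UUID=ESP_UUID      /boot  vfat  defaults  0  2
LABEL=system-swap  none   swap  sw        0  0
```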
Enable systemd-timesyncd
# apt -y install systemd-timesyncd
# systemctl enable systemd-timesyncd
Configure tzdata and locales (ensure en_US.UTF-8 is available, in addition to whatever else - en_GB.UTF-8 in my case).
# dpkg-reconfigure locales
# dpkg-reconfigure tzdata
And enable a tmpfs for /tmp
# cp /usr/share/systemd/tmp.mount /etc/systemd/system/tmp.mount
# systemctl enable tmp.mount
Setup your network interface (I’m using a wired adapter).
Edit /etc/network/interfaces.d/INTERFACE
auto INTERFACE
iface INTERFACE inet dhcp
Bootloader installation
Install GRUB
# apt -y install grub-efi-amd64
Probe the /boot directory and check the output looks reasonable
# grub-probe /boot
Update the initramfs
# update-initramfs -u -k all
It’s recommended in the official documentation to (temporarily) make GRUB (and Linux) emit more boot logs to aid any initial debugging - this can be reverted later.
Edit /etc/default/grub to remove quiet from GRUB_CMDLINE_LINUX_DEFAULT, and uncomment GRUB_TERMINAL=console.
Run
# update-grub
Mount efivars
# mount -t efivarfs efivars /sys/firmware/efi/efivars
and install grub
# grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=debian --recheck --no-floppy
Finally, set a root password ready for the first boot
# passwd
We’ll lock the root account after we’ve got the system up and running, and added a user.
Prepare for first boot
Exit the chroot, and unmount all the disks mounted at /mnt. Reboot!
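The teardown looks something like this (a sketch, run from the live environment after leaving the chroot):

```shell
# Unmount the chroot support filesystems, then the real ones
$ sudo umount /mnt/dev/pts /mnt/dev /mnt/sys /mnt/proc
$ sudo umount /mnt/boot
# Export the pool (this also unmounts datapool/home)
$ sudo zpool export datapool
$ sudo umount /mnt
$ sudo reboot
```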
Completing the system
Assuming the disks are unlocked and the system comes up, more than likely the ZFS disks won’t be mounted. Login as root, and re-import the pool
# zpool import datapool
ZFS will cache this information, and all future boots should mount ZFS based filesystems correctly.
Now create a new user - I create a new ZFS dataset for the home directory
# zfs create datapool/home/USER
Then create the user.
Add the user to the sudo group.
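On Debian both steps can be done with adduser (USER is a placeholder; adduser will notice the home directory already exists as a ZFS dataset and skip copying /etc/skel):

```shell
# Create the user account interactively
# adduser USER
# Add the user to the sudo group
# adduser USER sudo
```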
Logout of the root account, login as the new user, and lock the root account (this tests sudo)
$ sudo passwd -l root
Fix permissions for the new user’s home directory
$ sudo chown -R USER:GROUP /home/USER
along with any other restrictions/allowances you wish to make. For example, I removed group and other access to the user folder, and all files inside.
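That last restriction is a one-liner (a sketch - USER is a placeholder; adjust to taste):

```shell
# Remove group and other permissions from the home directory and everything in it
$ sudo chmod -R go-rwx /home/USER
```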
Finally, complete the system installation
$ sudo tasksel --new-install
I deselect everything but the base packages, since I want to set the system up myself.
Finally, we remove the reserved logical volume for the system snapshot, and create a snapshot of the system. This can be used to restore this base install if subsequent steps go wrong, and a similar process can be followed to store further checkpoints.
$ sudo lvremove system/snap
$ sudo lvcreate --size=16GB --snapshot -n snap /dev/system/root
LVM snapshots are also useful for creating a stable filesystem for backups - I hope to set up a process for regular copying of snapshots to a) the ZFS pool and b) a remote system for restoration.
This completes the base install.
Conclusion
There is a lot more to do here, and some smoothing to do to make the system nice. However, this gets a basic system setup with ZFS, and a blank canvas to build upon. I intend to write up the steps to a much more usable system in a future post.