HTGWA: Create a ZFS RAIDZ1 zpool on a Raspberry Pi

This is a simple guide, part of a series I'm going to call 'How-To Guides Without Ads'. In it, I'll document how I set up a ZFS zpool in RAIDZ1 on Linux on a Raspberry Pi.

Prerequisites

ZFS doesn't enjoy USB drives, though it can work on them. I wouldn't really recommend ZFS for the Pi 4 model B or other Pi models that can't use native SATA, NVMe, or SAS drives.

For my own testing, I'm using a Raspberry Pi Compute Module 4, and there are a variety of PCI Express storage controller cards and carrier boards with built-in storage controllers that make ZFS much happier.

I've also only tested ZFS on 64-bit Raspberry Pi OS, on Compute Modules with 4 or 8 GB of RAM. No guarantees under other configurations.
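If you're not sure which flavor of the OS you're running, one quick check is the kernel architecture; 64-bit Raspberry Pi OS reports aarch64:

$ uname -m
aarch64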

Installing ZFS

Since ZFS isn't bundled with the rest of Debian's 'free' software (because of licensing issues), you need to install the kernel headers, then install the two ZFS packages:

$ sudo apt install raspberrypi-kernel-headers
$ sudo apt install zfs-dkms zfsutils-linux
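If the zfs kernel module isn't loaded automatically once the DKMS build finishes, you can load it by hand (or just reboot):

$ sudo modprobe zfs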

Verify ZFS is loaded

$ dmesg | grep ZFS
[ 5393.504988] ZFS: Loaded module v2.0.2-1~bpo10+1, ZFS pool version 5000, ZFS filesystem version 5

You should see something like the above. If not, the module may not have loaded correctly.
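If dmesg shows nothing, you can also ask the kernel directly whether the module is currently loaded:

$ lsmod | grep zfs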

Prepare the disks

You should have at least three drives set up and ready to go. And make sure you don't care about anything on them. They're gonna get erased.

List all the devices on your system:

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  7.3T  0 disk 
└─sda1        8:1    0  7.3T  0 part /mnt/mydrive
sdb           8:16   0  7.3T  0 disk 
sdc           8:32   0  7.3T  0 disk 
sdd           8:48   0  7.3T  0 disk 
sde           8:64   0  7.3T  0 disk 
nvme0n1     259:0    0  7.3T  0 disk 
└─nvme0n1p1 259:1    0  7.3T  0 part /

I want to put sda through sde into the RAIDZ1 volume. I noticed sda already has a partition and a mount. We should make sure all the drives that will be part of the array are partition-free:

$ sudo umount /dev/sda?; sudo wipefs --all --force /dev/sda?; sudo wipefs --all --force /dev/sda
$ sudo umount /dev/sdb?; sudo wipefs --all --force /dev/sdb?; sudo wipefs --all --force /dev/sdb
...

Do this for each of the drives. If you didn't realize it yet, this wipes everything. It doesn't zero the data, though, so technically it could still be recovered at this point!
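If you'd rather not type that out for every drive, here's a minimal loop that does the same thing, assuming sda through sde really are the drives going into the array (double-check the device names on your system first):

$ for d in sda sdb sdc sdd sde; do
    sudo umount /dev/${d}? 2>/dev/null      # unmount any partitions (errors ignored if none are mounted)
    sudo wipefs --all --force /dev/${d}?    # wipe partition signatures, if any partitions exist
    sudo wipefs --all --force /dev/${d}     # wipe the disk's own signature
  done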

Check to make sure nothing's mounted (and make sure you have removed any of the drives you'll use in the array from /etc/fstab if you had persistent mounts for them in there!):

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  7.3T  0 disk 
sdb           8:16   0  7.3T  0 disk 
sdc           8:32   0  7.3T  0 disk 
sdd           8:48   0  7.3T  0 disk 
sde           8:64   0  7.3T  0 disk 
nvme0n1     259:0    0  7.3T  0 disk 
└─nvme0n1p1 259:1    0  7.3T  0 part /
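As for /etc/fstab, one quick way to eyeball what's still set to mount at boot (keep in mind entries may reference UUIDs or labels rather than device names):

$ grep -v '^#' /etc/fstab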

Looking good, time to start building the array!

Create a RAIDZ1 zpool

The following command will create a zpool using all the block devices listed:

$ sudo zpool create zfspool raidz1 sda sdb sdc sdd sde -f

For production use, you should really read up on the advantages and disadvantages of different RAID levels in ZFS, and how to structure zpools and vdevs. The particular structure you should use depends on how many and what kind of drives you have, as well as your performance and redundancy needs.
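As one illustration only (not what I'm building here), the same five drives in a raidz2 layout would survive two simultaneous drive failures, at the cost of another drive's worth of capacity; the command is otherwise identical:

$ sudo zpool create zfspool raidz2 sda sdb sdc sdd sde -f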

Verify the pool is set up correctly:

$ zfs list
NAME      USED  AVAIL     REFER  MOUNTPOINT
zfspool   143K  28.1T     35.1K  /zfspool

$ zpool status -v zfspool
  pool: zfspool
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    zfspool     ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sda     ONLINE       0     0     0
        sdb     ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdd     ONLINE       0     0     0
        sde     ONLINE       0     0     0

errors: No known data errors

And make sure it was mounted so Linux can see it:

$ df -h
...
zfspool          29T  128K   29T   1% /zfspool
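If you'd rather not dump everything at the pool's root, you can also carve out datasets inside the pool, each with its own mountpoint; the dataset name here is just an example:

$ sudo zfs create zfspool/archive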

Destroy a pool

If you no longer like swimming in the waters of ZFS, you can destroy the pool you created with:

$ sudo zpool destroy zfspool

Note: This will wipe out the pool and lead to data loss. Make sure you're deleting the right pool and have no data inside it that you care about.
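If you only want to detach the pool without erasing it (for example, to move the drives to another machine), zpool export is the non-destructive counterpart:

$ sudo zpool export zfspool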


