How to Create a ZFS Pool on Ubuntu Linux


This post details the process I used to create ZFS pools, datasets, and snapshots on Ubuntu Server.

I found the following pages very helpful while going through this process:


To start, I installed the ZFS package with the following command:

sudo apt install zfsutils-linux

Once installed, you can check the version to see if it installed correctly.

> zfs --version


ZFS Configuration

Now that ZFS is installed, we can create and configure the pool.

You have various options for configuring ZFS pools, each with different pros and cons. I suggest visiting the links at the top of this post or searching online for the best configuration for your use case.

I will be using RAID 10 (striped mirrors) in this guide. However, the majority of the steps are the same regardless of your chosen pool configuration.
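For reference, the other common layouts follow the same `zpool create` pattern. This is a sketch with placeholder device names (`/dev/sdX` and friends are stand-ins, not disks from this post), so adjust before running anything:

```shell
# Placeholder devices -- substitute your own disk names.

# RAID 10 equivalent: a pool of mirrored pairs (what this guide uses)
sudo zpool create tank mirror /dev/sdX /dev/sdY mirror /dev/sdZ /dev/sdW

# RAIDZ1: single parity, similar to RAID 5 (needs 3+ disks)
sudo zpool create tank raidz /dev/sdX /dev/sdY /dev/sdZ

# RAIDZ2: double parity, similar to RAID 6 (needs 4+ disks)
sudo zpool create tank raidz2 /dev/sdX /dev/sdY /dev/sdZ /dev/sdW
```

Mirrored pairs trade capacity for simpler expansion and faster resilvering; RAIDZ gives more usable space per disk at the cost of less flexible growth.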

Creating the Pool

To start, let's list the disks available to use. You can use the fdisk command to see all available disks.

sudo fdisk -l

Or, if you currently have them mounted, you can use the df command to view your disks.

> sudo df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       7.3T   28K  6.9T   1% /mnt/red-01
/dev/sdb1       7.3T  144G  6.8T   3% /mnt/red-02
/dev/sdc1       7.3T  5.5T  1.9T  75% /mnt/white-02
/dev/sdd1       9.1T  8.7T  435G  96% /mnt/white-01
/dev/sde1       7.3T   28K  6.9T   1% /mnt/red-03
/dev/sdf1       7.3T   28K  6.9T   1% /mnt/red-04

If you're going to use mounted disks, make sure to unmount them before creating the pool.

sudo umount /dev/sda1
sudo umount /dev/sdb1

Now that I've identified the disks I want to use and have them unmounted, let's create the pool. For this example, I will call it tank.

sudo zpool create -f -m /mnt/pool tank mirror /dev/sda /dev/sdb

See below for the results of the new ZFS pool named tank, with a vdev automatically named mirror-0.

> zfs list

NAME   USED  AVAIL     REFER  MOUNTPOINT
tank   396K  7.14T       96K  /tank
> zpool status

  pool: tank
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0

errors: No known data errors

We can also look at the mounted filesystem to see where the pool is mounted and some quick stats.

> df -h

Filesystem      Size  Used Avail Use% Mounted on
tank            7.2T  128K  7.2T   1% /tank

Expanding the Pool

If you want to expand this pool, you will need to add a new vdev to it. Since I am using 2 disks per vdev, I will add another 2-disk mirror vdev to the existing tank pool.

sudo zpool add tank mirror /dev/sdX /dev/sdY

If you're adding disks of different sizes, you'll need to use the -f flag. Keep in mind that the vdev's capacity will be limited by its smallest disk.

sudo zpool add -f tank mirror /dev/sdX /dev/sdY

I added two 8TB hard drives and this process took around 10 seconds to complete.

When viewing the pool again, you can see that the pool has now doubled in size: 14.3 TB of usable space, with an equal amount of raw capacity consumed by mirroring.

> zfs list

NAME         USED  AVAIL     REFER  MOUNTPOINT
tank         145G  14.3T      104K  /tank
tank/cloud   145G  14.3T      145G  /tank/cloud
tank/media    96K  14.3T       96K  /tank/media

Converting Disks

Some disks, such as NTFS-formatted drives, will need to be partitioned and formatted prior to being added to the pool.

Start by identifying the disks you want to format and add to the pool.

sudo fdisk -l | grep /dev

I am going to format my /dev/sdc and /dev/sdd disks with the fdisk command.

See below for the fdisk help menu, which lists the commands available. Here's what I did to create basic Linux partitions:

I repeated this process for both disks.

> sudo fdisk /dev/sdc

Welcome to fdisk (util-linux 2.37.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.

Command (m for help): m

Help:

  GPT
   M   enter protective/hybrid MBR

  Generic
   d   delete a partition
   F   list free unpartitioned space
   l   list known partition types
   n   add a new partition
   p   print the partition table
   t   change a partition type
   v   verify the partition table
   i   print information about a partition

  Misc
   m   print this menu
   x   extra functionality (experts only)

  Script
   I   load disk layout from sfdisk script file
   O   dump disk layout to sfdisk script file

  Save & Exit
   w   write table to disk and exit
   q   quit without saving changes

  Create a new label
   g   create a new empty GPT partition table
   G   create a new empty SGI (IRIX) partition table
   o   create a new empty DOS partition table
   s   create a new empty Sun partition table
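As an aside, the same repartitioning can be scripted non-interactively. This is a sketch using parted rather than the interactive fdisk session above, assuming you want a single GPT partition spanning the disk; the device name is a placeholder:

```shell
# Hypothetical non-interactive equivalent of the fdisk steps.
# DESTRUCTIVE -- double-check the device name before running.
DISK=/dev/sdX
sudo parted --script "$DISK" mklabel gpt mkpart primary 0% 100%
```

This is handy when preparing several disks in a loop, but the interactive route shown above is easier to sanity-check disk by disk.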

Once the drives are formatted, we can add these disks to the pool.

sudo zpool add tank mirror /dev/sdc /dev/sdd

When we list the pool again, we can see that our size has grown to approximately 22 TB. My hard drives total 45.6 TB when shown with fdisk -l, so with a RAID 10 configuration roughly half of that raw capacity goes to mirroring, leaving about 22 TB usable.

> zfs list

NAME         USED  AVAIL     REFER  MOUNTPOINT
tank         145G  21.7T      104K  /tank
tank/cloud   145G  21.7T      145G  /tank/cloud
tank/media    96K  21.7T       96K  /tank/media

Creating Datasets

According to ZFS Terminology, a dataset can refer to "clones, file systems, snapshots, and volumes."

For this guide, I will use the dataset term to refer to file systems created under a pool.

Within my tank pool, I am going to create some datasets to help organize my files. This will give me dedicated locations to store data rather than simply dumping everything at /tank/.

sudo zfs create tank/cloud
sudo zfs create tank/media

Once created, you can see these datasets in the output of your pool list:

> zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
tank         752K  7.14T      104K  /tank
tank/cloud    96K  7.14T       96K  /tank/cloud
tank/media    96K  7.14T       96K  /tank/media
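Datasets can also carry their own properties. As a sketch of common tuning, the compression and quota settings below are my own additions, not part of the original setup:

```shell
# Hypothetical per-dataset tuning, not from the original setup:
# enable lz4 compression on the media dataset...
sudo zfs set compression=lz4 tank/media
# ...and cap the cloud dataset at 2 TB.
sudo zfs set quota=2T tank/cloud

# Verify the properties:
zfs get compression,quota tank/media tank/cloud
```

Properties are inherited by child datasets by default, so setting them on the pool root applies them to every dataset created underneath it.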

Creating Snapshots

Next, let's create our first snapshot. We can do this by calling the snapshot command and giving it a name. In my example, I include the current date and time.

sudo zfs snapshot tank@$(date '+%Y-%m-%d_%H-%M')

We can list the snapshots in our pool with the following command:

> zfs list -t snapshot
NAME                    USED  AVAIL     REFER  MOUNTPOINT
tank@2024-02-06_19-41     0B      -      104K  -
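The timestamped name is just shell substitution. A quick sketch of how the name is built; the -r flag shown in the comment is my own addition (it recursively snapshots child datasets) and was not used above:

```shell
# Build a timestamped snapshot name like the one used above.
SNAP="tank@$(date '+%Y-%m-%d_%H-%M')"
echo "$SNAP"

# To snapshot a dataset and all of its children at once, add -r:
# sudo zfs snapshot -r "$SNAP"
```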

Destroying Snapshots

You can always destroy snapshots that are no longer needed:

sudo zfs destroy tank@2024-02-06_19-41

Once deleted, they will no longer appear in the list:

> zfs list -t snapshot
no datasets available

My Thoughts on ZFS So Far