This post details the process I used to create ZFS pools, datasets, and snapshots on Ubuntu Server.
I found the following pages very helpful while going through this process:
To start, I installed the ZFS package with the following command:
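On Ubuntu, ZFS is provided by the `zfsutils-linux` package:

```bash
# install the ZFS userland tools (pulls in kernel module support)
sudo apt update
sudo apt install zfsutils-linux
```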
Once installed, you can check the version to see if it installed correctly.
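On recent OpenZFS releases:

```bash
# print the OpenZFS userland and kernel module versions
zfs --version
```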
Now that ZFS is installed, we can create and configure the pool.
You have various options for configuring ZFS pools, each with its own pros and cons. I suggest visiting the links at the top of this post or searching online for the best configuration for your use case.
- Striped VDEVs (Raid0)
- Mirrored VDEVs (Raid1)
- Striped Mirrored VDEVs (Raid10)
- RAIDz (Raid5)
- RAIDz2 (Raid6)
- Nested RAIDz (Raid50, Raid60)
I will be using Raid10 in this guide. However, the majority of the steps are the same regardless of your chosen pool configuration.
Creating the Pool
To start, let's list the disks available to use. You can use the `fdisk` command to see all available disks:
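```bash
# list every disk and its partition table
sudo fdisk -l
```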
Or, if you currently have them mounted, you can use the `df` command to view their mount points:
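```bash
# show mounted filesystems with human-readable sizes
df -h
```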
If you're going to use mounted disks, make sure to unmount them before creating the pool.
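For example (the mount points here are placeholders; use your own):

```bash
# unmount each disk that is going into the pool
sudo umount /mnt/disk1
sudo umount /mnt/disk2
```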
Now that I've identified the disks I want to use and have them unmounted, let's create the pool. For this example, I will call it `tank`.
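A minimal sketch, assuming the two disks are `/dev/sdb` and `/dev/sdc` (substitute your own device names):

```bash
# create a pool named tank from a single two-disk mirror vdev
sudo zpool create tank mirror /dev/sdb /dev/sdc
```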
You can check the result with `zpool status`, which shows the new ZFS pool named `tank` with its mirrored vdev:
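```bash
# show the pool's layout and health
zpool status tank
```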
We can also look at the mounted filesystem to see where the pool is mounted and some quick stats.
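By default, a pool is mounted at `/<poolname>`:

```bash
# show where the pool is mounted, plus size and usage
df -h /tank
```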
Expanding the Pool
If you want to expand this pool, you will need to add a new VDEV to it. Since I am using 2 disks per VDEV, I will need to add a new 2-disk VDEV to the pool.
If you're adding disks of different sizes, you'll need to use the `-f` flag to force the operation. Keep in mind that the vdev's capacity will be limited to the smallest disk added.
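A sketch of both cases, assuming example device names:

```bash
# grow the pool with a second two-disk mirror vdev
sudo zpool add tank mirror /dev/sdd /dev/sde

# if the two disks differ in size, force the add; the mirror's
# capacity is capped by the smaller disk
sudo zpool add -f tank mirror /dev/sdd /dev/sde
```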
I added two 8TB hard drives and this process took around 10 seconds to complete.
When viewing the pool again, you can see that it has doubled in size: we now have 14.3 TB of usable space and the same amount used for mirroring.
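To view the updated capacity:

```bash
# show pool size, allocation, and free space
zpool list tank
```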
Some disks, such as NTFS-formatted drives, will need to be partitioned and formatted prior to being added to the pool.
Start by identifying the disks you want to format and add to the pool.
I am going to format my disks (starting with `/dev/sdd`) using the `fdisk` utility. Here's what I did to create basic Linux-formatted disks:
- `g`: Create a GPT partition table
- `n`: Create a new partition, hitting Enter to accept all the default options
- `t`: Change the partition type to Linux filesystem
- `w`: Write the changes to disk and exit
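The interactive session looks like this (one disk shown):

```bash
# open the disk in fdisk, then apply g, n, t, w in order
sudo fdisk /dev/sdd
```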
I repeated this process for both disks.
Once the drives are formatted, we can add these disks to the pool.
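Assuming the new partitions came up as `/dev/sdd1` and `/dev/sde1` (check `fdisk -l` for yours):

```bash
# add the freshly partitioned disks as another mirror vdev
sudo zpool add tank mirror /dev/sdd1 /dev/sde1
```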
When we list the pool again, we can see that our size is now updated to approximately 22TB. That lines up with my hard drives totalling 45.6TB in `fdisk -l`: with a Raid10 configuration, roughly 22TB goes to mirroring and 22TB is usable space.
Creating Datasets

According to the ZFS documentation, the term dataset can refer to "clones, file systems, snapshots, and volumes." For this guide, I will use the term dataset to refer to file systems created under a pool.
Under my `tank` pool, I am going to create some datasets to help organize my files. This gives me dedicated locations to store data rather than simply dumping everything at the pool's root.
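The dataset names below are just examples; use whatever layout fits your data:

```bash
# create a few datasets under the tank pool
sudo zfs create tank/media
sudo zfs create tank/documents
sudo zfs create tank/backups
```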
Once created, you can see these datasets in the output of your pool list:
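```bash
# list all datasets in every pool
zfs list
```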
Creating Snapshots

Next, let's create our first snapshot. We can do this by calling the `zfs snapshot` command and giving it a name. I will be throwing the current date and time into my example.
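Using one of the example datasets from above:

```bash
# snapshot the dataset, tagging it with the current date and time
sudo zfs snapshot tank/media@$(date +%Y%m%d-%H%M%S)
```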
We can list the snapshots in our pool with the following command:
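```bash
# list every snapshot in the pool
zfs list -t snapshot
```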
You can always destroy snapshots that are no longer needed:
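The snapshot name here is an example; use one from your own list:

```bash
# destroy a snapshot by its full name
sudo zfs destroy tank/media@20250101-120000
```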
Once deleted, they will no longer appear in the list:
My Thoughts on ZFS So Far
- I sacrificed 25TB to be able to mirror my data, but I feel more comfortable knowing I can save my data by quickly replacing a disk if I need to.
- The set-up was surprisingly easy and fast.
- Disk I/O is fast as well. I was worried that the data transfer speeds would be slower due to the RAID configuration.
- Media streaming and transcoding have seen no noticeable drop in performance.
- My only limitation really is the number of HDD bays in my server HDD cage.