Create a ZFS volume on Ubuntu

ZFS is a disk and logical volume manager that combines RAID-like functionality with guaranteed data integrity. Every block of data read by ZFS is checksummed and recovered if an error is found. ZFS can also periodically check the entire file system for any silent corruption which may have occurred since the data was written.
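
This full check is known as a scrub, and it can also be triggered manually. A quick sketch, assuming a pool named datastore (the name is illustrative; we create this pool later in the post):

zpool scrub datastore     # start a full integrity check of the pool
zpool status datastore    # view scrub progress and any errors found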

ZFS was initially developed by Sun for use in Solaris and as such was not available on Linux distributions. Thanks to some clever guys over at ZFS on Linux, this has now changed. We can now install ZFS on most Linux distributions such as Debian/Ubuntu and Red Hat/CentOS.

ZFS provides a data volume which can have multiple mount points, spanning multiple disks. Disks can be combined into virtual groups to allow for various redundancy options (example zpool create commands for each layout are sketched after this list):

  • Mirror – data will be mirrored across disks, equivalent to RAID 1. This is quite simply a copy of one disk to another every time data is changed. You require a minimum of two disks for a mirrored set. This provides the best redundancy but requires the most space. For example, if you use 2x 500GB disks, only 500GB will be available, as the other 500GB will be a copy of the first disk.
  • Stripe – data will be stored across all available disks, equivalent to RAID 0. In a two-disk striped array, half of a file would be on disk one and half on disk two. This provides the fastest read and write speeds but offers no redundancy. In the event of a failed disk, all data on the stripe will be lost.
  • RAID-Z – data will be written to all but one of the disks, with the remaining disk used for parity. This is equivalent to RAID 5. A minimum of three disks is required, with one disk always being used for parity. In the event of a single disk failure, all data can be recovered and, in fact, will still be accessible providing no further disks fail. In the event of a second disk failure, all data on the RAID-Z will be lost.
  • RAID-Z 2 and RAID-Z 3 – these are the same as RAID-Z but with two and three disks used for parity respectively. RAID-Z 3 is recommended for highly critical data consistency environments. RAID-Z 2 requires a minimum of 4 disks, and RAID-Z 3 requires a minimum of 5 disks.
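
As a rough sketch, the zpool create command maps onto these layouts as shown below. The pool name tank and the device names are placeholders; substitute your own.

zpool create -f tank /dev/vdb /dev/vdc                           # stripe (RAID 0)
zpool create -f tank mirror /dev/vdb /dev/vdc                    # mirror (RAID 1)
zpool create -f tank raidz /dev/vdb /dev/vdc /dev/vdd            # RAID-Z (RAID 5 equivalent)
zpool create -f tank raidz2 /dev/vdb /dev/vdc /dev/vdd /dev/vde  # RAID-Z 2 (double parity)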

[Diagram: ZFS high-level structure]

In addition to these virtual groups, multiple groups can be combined. For example, you can stripe across two mirrored virtual groups (a stripe of mirrors), the ZFS equivalent of RAID 10. This gives the added performance of striped volumes with the redundancy of mirrored volumes.
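
For example, assuming four placeholder disks, listing two mirror groups in a single zpool create stripes the data across both mirrors:

zpool create -f tank mirror /dev/vdb /dev/vdc mirror /dev/vdd /dev/vde    # stripe of two mirrors (RAID 10)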

In the example below, we are going to create a single RAID-Z 1 pool with three disks. This gives us two full disks of storage, and a further disk for parity.

Installing ZFS on Ubuntu

Before we can start using ZFS, we need to install it. Simply add the ZFS on Linux PPA to apt with the following command:

apt-add-repository --yes ppa:zfs-native/stable

In a minimal package install, you may not have apt-add-repository installed.

The program 'apt-add-repository' is currently not installed.  You can install it by typing:
apt-get install python-software-properties

If this is the case, install it before running the apt-add-repository command.

apt-get install python-software-properties

Update the apt cache with the update argument:

apt-get update

Install the ZFS binaries, tools and kernel modules. This may take a while due to the number of packages apt will have to download and the time needed to build the tools and the ZFS modules for the kernel.

apt-get install ubuntu-zfs

At this point, it is best to test that the kernel module was correctly compiled and loaded.

dmesg | grep ZFS

The output should look like the below. If it does not, try running modprobe zfs.

[  824.725076] ZFS: Loaded module v0.6.1-rc14, ZFS pool version 5000, ZFS filesystem version 5
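
If the module is missing, loading it by hand and re-checking should produce the same message:

modprobe zfs        # load the ZFS kernel module
dmesg | grep ZFS    # confirm the module reported itself in the kernel log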

Create a RAID-Z 1 three-disk array

Once ZFS is installed, we can create a virtual volume from our three disks. The three disks should all be the same size; if they are not, only the smallest disk's capacity will be usable on each disk.

Identify the disks you would like to use with fdisk. Some disk controllers may have their own naming conventions and administration tools, but we'll use fdisk in this example. While we are on this point, RAID controllers should not be set up with their RAID functionality when using ZFS. Some of the mechanisms in ZFS can be fooled by an underlying layer that is also doing data parity, and data corruption can occur in this environment.

fdisk -l | grep /dev/

The output will look like:

Disk /dev/vdb doesn't contain a valid partition table
Disk /dev/vdc doesn't contain a valid partition table
Disk /dev/vdd doesn't contain a valid partition table

And there we have it: the three disks to add to our ZFS array. Note that I have removed the root volume from this output to avoid confusion.

Run the zpool create command, passing in the disks to use for the array as arguments. Specifying the -f argument forces creation and removes the need to create partitions on the disks prior to creating the array. This command creates a zpool called datastore, however you can change this to suit your needs.

zpool create -f datastore raidz /dev/vdb /dev/vdc /dev/vdd

Confirm the zpool has been created with:

zpool status datastore

The output should be similar to:

  pool: datastore
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datastore   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vdb1    ONLINE       0     0     0
            vdc1    ONLINE       0     0     0
            vdd1    ONLINE       0     0     0

errors: No known data errors

Create ZFS dataset

At this point, we now have a zpool spanning three disks. One of these is used for parity, giving us the chance to recover in the event of a single disk failure. The next step is to make the volume usable and add features such as compression, encryption or de-duplication.
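
These features are set as properties on a pool or dataset. A brief sketch against our datastore pool (lz4 availability depends on your ZFS version; older builds may only offer lzjb or gzip):

zfs set compression=lz4 datastore    # compress newly written blocks
zfs set dedup=on datastore           # de-duplicate identical blocks (very RAM hungry)
zfs get compression,dedup datastore  # confirm the current property values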

Multiple datasets or mount points can be created on a single volume. Generally, you do not specify the size of these. Put simply, the storage of the zpool will be available to any dataset as it requires it. You can set up quotas to manage dataset sizes, but that won't be covered in this tutorial.
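
For reference, a quota is just another dataset property. For example, to cap one of the datasets we create below at a hypothetical 10GB:

zfs set quota=10G datastore/homes    # limit the dataset to 10GB of pool space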

What we are interested in is creating three volumes: binaries, homes and backups. These will be mounted at /mnt/binaries, /mnt/homes and /mnt/backups respectively. Using the zfs create command, create the three required volumes.

We specify the mount point, zpool and dataset name in the command.

zfs create -o mountpoint=[MOUNT POINT] [ZPOOL NAME]/[DATASET NAME]

Example:

zfs create -o mountpoint=/mnt/binaries datastore/binaries
zfs create -o mountpoint=/mnt/homes datastore/homes
zfs create -o mountpoint=/mnt/backups datastore/backups

Test that the datasets have been created with zfs list.

zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
datastore            312K  62.6G  38.6K  /datastore
datastore/backups   38.6K  62.6G  38.6K  /mnt/backups
datastore/binaries  38.6K  62.6G  38.6K  /mnt/binaries
datastore/homes     38.6K  62.6G  38.6K  /mnt/homes

And an ls in /mnt should give us the mount points.

ls /mnt/
backups/   binaries/   homes/

You can now use your mounted datasets as required. You can export them as NFS, CIFS or simply use them as local storage!
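
Sharing over NFS, for instance, can be handled directly by ZFS properties, provided an NFS server is installed. A minimal sketch:

zfs set sharenfs=on datastore/binaries    # export the dataset over NFS
zfs get sharenfs datastore/binaries       # confirm the share property is set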

See my other posts for compression and encryption. Please note, encryption is not currently available on ZFS for Linux.


11 Comments

Peter Auyeung

1-Apr-2014 at 6:37 am

Hi James,

I am also building Gluster on ZFS and wonder if you have hit the issues of space not being released on ZFS after file deletion, and of not being able to set the NFS quota on Gluster.

Thank you for your guidance
Peter

    james.coyle

    1-Apr-2014 at 11:30 am

    Hi Peter,

    I have not used the NFS quota on ZFS but the free space is certainly released as files are added and removed.

Daniel

16-May-2014 at 10:30 pm

Hello… My server: Xeon 4-core, 4GB RAM, one 500GB drive for the OS and three 2TB drives for the ZFS volume. I followed these instructions and everything worked, except the performance of the ZFS volume was terrible: inside Linux I would only get 67MB/s, and over the network on gigabit only 40MB/s. After some research I installed zfs-initramfs (apt-get install zfs-initramfs) and restarted my server. It messed up my dataset and I had to recreate it, but now I get between 250 and 300MB/s, and over the network (Samba) I max out at 120MB/s. Love my server, running as a data server and media server with Plex. Works great.

    Mrawhimskell

    14-Dec-2015 at 2:45 am

    You definitely need to use more memory to get the best out of ZFS…

Justin Thuet

4-Jan-2016 at 3:49 pm

Excellent article, James. I just wrapped up building a test container. I'm looking at this for my small office storage (I will also admit, I have had a lot of interest in ZFS for years, but am just now getting a chance to play). A question on de-duplicating files and performance: what sort of impact have you seen, and have you tested it?

I also am going to run through your tutorial on compression and replication :)

Thanks for your time building these, I know how much time these things take. I appreciate it.

    james.coyle

    4-Jan-2016 at 4:55 pm

    Hi Justin, thanks for your comment :)

ZFS, in my mind at least, is one of the best file systems out there – offering encryption, redundancy, compression, and an easy-to-administer tool-set.

I have a single pool of 15 x 2TB disks which is used for a bunch of things. I only have about 10GB of files using de-duplication – around 50k files – and the performance is sufficient for my needs. I don't have any benchmark results I can publish, but my feel for it is that the performance is reduced, though not hugely. It consumes a massive amount of RAM, however, so you'll need to make sure you have the hardware and only use dedupe on data that will benefit from it.

    Once you have ZFS set up, run the below command and multiply the block total by 320 to see how much RAM will be required to de-dupe the files on the pool:
    zdb -S mypoolname

K Mark

27-Jul-2016 at 8:14 pm

How do you create one volume group out of two disks?

Ashwin Perti

26-Sep-2016 at 4:35 pm

How do you create /dev/vdb, /dev/vdc and /dev/vdd?

    james.coyle

    26-Sep-2016 at 6:24 pm

    These are your disks.

madhavi

8-Nov-2016 at 8:37 am

Hi, I'd request you to share the tutorial on compression and replication; I would like to go through it. All of your write-ups on ZFS, Gluster and Ceph are interesting.

Thanks in advance.

zvol

4-Apr-2018 at 8:34 am

What you show here is not a ZFS volume.
