Proxmox 4.x bind mount – mount storage in an LXC container




An LXC container's storage is simple to set up and maintain, and is usually done through either a web-based GUI or a command line utility. It's simple to set the size of the disk allocated to an LXC container, and you can increase it easily, even while the container is still running.

Whilst simple to set up and administer, the standard storage options of LXC containers are limited. For example, you can't mount an NFS share in an LXC container, nor can you have multiple disks mounted as /dev block devices.

That's where a bind mount comes in. You can add one or more mount points to your LXC container config, each specifying a source path and a target path, which are activated when the container starts. The source path is a location on the host machine (the physical host running the LXC container – the Proxmox host in this example). The target is a location inside the LXC container, such as /mnt/myshare. This means that you can mount an NFS share, a GlusterFS share, several physical disks or anything else that can be mounted on your host, and pass it through to your container.
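As a minimal sketch of the first half of that flow (the NFS server address 192.168.0.10 and export path /export/share are hypothetical, not from this article), the share is first mounted on the Proxmox host itself:

```shell
# On the Proxmox host: mount an NFS export at a path that will later be
# bind-mounted into the container (server address and export are examples)
mkdir -p /mnt/pve/nfs-share
mount -t nfs 192.168.0.10:/export/share /mnt/pve/nfs-share
```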

Before you start, you'll need to make sure both the host location and the target container location exist, otherwise the container will fail to start. You'll then need to edit your LXC container config file. On Proxmox 4.x this can be found in /etc/pve/lxc/, in a file named after the ID of your container. In this example the container we're working on has an ID of 101, so the file is /etc/pve/lxc/101.conf.
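For example, assuming the paths used below and the example container ID of 101, both locations could be created like this (pct exec requires the container to be running; alternatively, create the directory from a shell inside the container):

```shell
# Create the source path on the Proxmox host
mkdir -p /mnt/pve/nfs-share

# Create the target path inside container 101 (container must be running)
pct exec 101 -- mkdir -p /mnt/nfs
```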

Add the following line, and substitute SOURCE with the path that you'd like to pass through to your container and TARGET with the path inside the container.

The below example will make /mnt/pve/nfs-share available in the container at /mnt/nfs.
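Following the article's example, the line would look like this (note there is no space after the comma; commenters report that a space there can stop the container from starting on some 4.x versions):

```
mp0: /mnt/pve/nfs-share,mp=/mnt/nfs
```

Some readers also report needing ,size=0 appended on certain versions so that Proxmox treats the path as a directory rather than a disk image.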

Then restart your CT for the changes to take effect.
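Assuming the pct tool and the example container ID of 101, the restart could look like:

```shell
# Restart the container so the new mount point takes effect
pct stop 101
pct start 101
```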


If you have multiple paths to mount then you can increment mp0 to mp1, mp2, etc.
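For example, two bind mounts in the same config file might look like this (the paths follow the article's example plus a hypothetical second share):

```
mp0: /mnt/pve/nfs-share,mp=/mnt/nfs
mp1: /mnt/pve/gluster-share,mp=/mnt/gluster
```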

If you're using version 3.x of Proxmox, or standalone OpenVZ, then see Proxmox bind mount for OpenVZ.


13 Comments

Iain

15-Mar-2016 at 10:13 am

Running Proxmox 4.1-1 and using this method the CT won't start: 'The container failed to start'. Any ideas?

    james.coyle

    15-Mar-2016 at 10:15 am

    What error message is presented?

    Alan

    15-Mar-2016 at 8:10 pm

    I have run into this error message. In my case, it happened when I did not first create the mount point directory within the container.

    mp0: /mnt/pve/nfs-share, mp=/mnt/nfs
    mp1: /mnt/pve/gluster-share, mp=/mnt/gluster

    So for these examples, be sure to create /mnt/nfs and /mnt/gluster inside the container first.

      jim

      10-Apr-2016 at 4:00 pm

      I have a directory on proxmox /home/shares/allusers/cloud. I have a directory on my container /mnt/storage. I add the following line in /etc/pve/lxc/100.conf

      mp0: /home/shares/allusers/cloud, mp=/mnt/storage

I restart the container and it fails with "Error: command 'lxc-start -n 100' failed: exit code 1".

        Elurex

        17-Jun-2016 at 7:00 am

        I have

        mp0: volume=/rpool/storage/administration,mp=/storage/administration

        and it works fine with PVE 4.2.

        By the way, bind mounts only work up to mp9; mp10 will fail to mount.

teemu

25-Aug-2016 at 8:16 am

In my case I had to remove the space between the comma and mp, or the container won't start:
mp0: /mnt/storage,mp=/mnt/storage

Lukas

20-Sep-2016 at 5:24 pm

Will this work for a VM (QEMU)? Maybe there is another way to mount a folder there?
I want to mount 30TB of RAID storage for a VM that will be my backup machine. This storage is on the local machine and needs to be mounted under other VMs and CTs as well.

    james.coyle

    21-Sep-2016 at 5:38 pm

You'll need to share it on your host using NFS, CIFS, etc. and mount it on your client.

Erin

15-Dec-2016 at 5:33 pm

I had problems implementing this until I found a page which told me you have to append "size=0" to the mp0 line so that it knows you're passing a directory and not an image. The line I used successfully is

mp0: /mnt/amazon/unifi,mp=/mnt/amazon,size=0

Michael

22-Dec-2016 at 9:09 am

Hi James,

Thanks for your post. I have a question: I would like to check that my NFS mount points inside the LXC container are OK. I've found that the path is /var/lib/vz/images/lxc_id, but the folder is empty when I check from the host.

I would like to write a simple bash script to test whether my NFS mount points are OK inside my containers. Do you have an idea? Feel free to answer me by mail.

Kind regards !

Ben

31-Mar-2017 at 11:20 pm

I'm running Proxmox VE 4.4-84 and lxc-pve 2.0.7-a. I mounted the NFS share directly on the Proxmox host with the CLI:
10.11.108.81:/icedev-nfs01 /mnt/pve/icedev-nfs01

Added the following line to /etc/pve/lxc/107.conf file
mp0: /mnt/pve/icedev-nfs01, mp=/mnt/icedev-nfs01

Back in the LXC container, I created the /mnt/icedev-nfs01 mount point and rebooted the CT. The CT came back up normally; however, the NFS mount doesn't work at all. Any suggestions or help would be greatly appreciated.

Ben

31-Mar-2017 at 11:24 pm

Inside my CT, the df -h output is:
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 504G 916M 478G 1% /
none 492K 0 492K 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 8.2M 32G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
tmpfs 6.3G 0 6.3G 0% /run/user/26528

Ben

31-Mar-2017 at 11:43 pm

I found the problem: there was a space in the line below
/mnt/pve/icedev-nfs01, mp=/mnt/icedev/nfs01
instead of
/mnt/pve/icedev-nfs01,mp=/mnt/icedev/nfs01

James,
Thank you so much for your article.
