Move Proxmox Container to Different Storage (Updated for LXC)

The Proxmox Web GUI does not give us the ability to migrate a container from one storage device to another directly. To move a container onto different storage we have to take a backup of the container and restore it to the same ID with a different storage device specified. This can be laborious when working with several containers.

This is an update to the OpenVZ script found here.

The below script allows you to move an LXC container from one storage device to another. The process requires that the container be stopped, which the script will handle.

Save the below script into a file called migrate.
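
The original script isn't reproduced here, so the following is a minimal sketch of the approach described above. It assumes the Proxmox pct and vzdump command line tools are available; the TMP location shown is illustrative.

    #!/bin/bash
    # migrate - move an LXC container to a different storage device (sketch).
    # Usage: migrate [-d] -s <target-storage> -c <container-id>

    TMP=/var/tmp   # where the temporary backup tar is written; needs enough free space

    DELETE_BACKUP=0
    while getopts "ds:c:" opt; do
      case $opt in
        d) DELETE_BACKUP=1 ;;
        s) STORAGE=$OPTARG ;;
        c) CTID=$OPTARG ;;
      esac
    done

    if [ -z "$STORAGE" ] || [ -z "$CTID" ]; then
      echo "Usage: $0 [-d] -s <target-storage> -c <container-id>" >&2
      exit 1
    fi

    # Stop the container, back it up to TMP, then restore it to the target storage.
    pct stop "$CTID"
    vzdump "$CTID" --dumpdir "$TMP"
    BACKUP=$(ls -t "$TMP"/vzdump-lxc-"$CTID"-*.tar* | head -n 1)
    pct destroy "$CTID"
    pct restore "$CTID" "$BACKUP" --storage "$STORAGE"
    pct start "$CTID"

    # Remove the temporary backup if -d was given.
    [ "$DELETE_BACKUP" -eq 1 ] && rm -f "$BACKUP"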

Set execution permissions on the script:
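
    chmod +x migrate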

The script has several parameters which are detailed below:

  • -d is specified if you would like the script to delete the temporary backup after the process has completed. Leave this out if you would like the backup tar file to be kept, just in case anything goes wrong.
  • -s is required to specify the name of the target storage. You can find this from the Proxmox Web GUI.
  • -c is required for the container ID to migrate.

In addition, the script contains the variable TMP. This will be the location of the backup tar created as part of the migration process and must contain enough space to store the content of the container being migrated. You can change this to suit your environment.

Example command:
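
The below would move container 101 to a storage named local-lvm and delete the temporary backup afterwards (both values are illustrative):

    ./migrate -d -s local-lvm -c 101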

 



Reduce Proxmox LXC Backup Size and Time

Category : How-to


Proxmox backs up guests byte-for-byte in a compressed archive. Looking at LXC backups specifically, the file system is compressed into the target backup file with just a few exceptions – temp files aren’t included. You can also add your own exceptions by editing the vzdump.conf to exclude specific file patterns.
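
For example, exclude-path entries can be added to /etc/pve/vzdump.conf; the patterns below are purely illustrative:

    exclude-path: /tmp/?*
    exclude-path: /var/log/?*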

All that said, one of the biggest disk space wasters is apt's cache directory, which holds the installation packages for software you have installed. This can generally be safely removed on internet connected machines, reducing your overall backup size.

For example, a newly created Debian LXC that’s been recently updated shows a total of 206MB of disk used.
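
You can check the cache usage from inside the container with du (the path shown is for Debian/Ubuntu guests):

    du -sh /var/cache/apt/archives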

After clearing this with the command apt-get clean, we can see the space has mostly been freed.
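
For example:

    apt-get clean
    du -sh /var/cache/apt/archives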

Considering this whole container is only consuming approximately 1GB of disk space, 200MB is quite significant.

vzdump hooks

Now that we can see how much space we can save, we need to make Proxmox issue the apt-get clean command before it creates the backup of our container.

vzdump, the utility which Proxmox uses to perform backups, can call a script at various stages of the backup process – these stages are:

  • backup-start
  • backup-end
  • backup-abort
  • log-end
  • pre-stop
  • pre-restart
  • post-restart

We can use these hooks to run our own commands at any of these points of the backup. For the goal of this blog post, we want to run the apt-get clean command at the point of backup-start.

Create a script on your Proxmox host with the following content:
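
The original script isn't shown here, so below is a minimal sketch. It assumes the container is running during the backup so that pct exec works, and relies on vzdump passing the phase, mode and VM ID as the first three arguments to the hook script. Remember to make it executable with chmod +x.

    #!/bin/bash
    # vzdump hook script: clean apt's package cache before the backup starts.
    # vzdump calls hook scripts as: <script> <phase> <mode> <vmid>
    phase=$1
    mode=$2
    vmid=$3

    if [ "$phase" = "backup-start" ]; then
        # Run apt-get clean inside the container.
        pct exec "$vmid" -- apt-get clean
    fi

    exit 0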

Now edit your vzdump.conf file and add the following line to point to your new script. Remember to change the location of where your script is – I’ve just saved mine in /root/.
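
    script: /root/vzdump-apt-clean.sh

The file name here is illustrative – point it at wherever you saved your script.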

 



LXC 2.x/ LXD Cheat Sheet


Category : Cheat Sheets


Here are some of the most used commands for creating and managing an LXC/LXD host and containers. It's assumed that you have a working environment and a privileged SSH connection to the LXC server for issuing the commands.

Basics

Start and Stop a LXC Container

Starting out with the basics here – starting and stopping an LXC container.
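
Replace [CONTAINER] with the container name:

    lxc start [CONTAINER]
    lxc stop [CONTAINER]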

List Containers

Display a list of container details for started and stopped containers. The name field is what’s usually used in other commands to reference the specific container.
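
    lxc list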

Create Container from Image

There are further details below on managing images and remote image repository, which you’ll need when creating a new container.

This example will create a new container and start it using the Ubuntu 16.04 image. Change [CONTAINER] to be the name of the new container.
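
    lxc launch ubuntu:16.04 [CONTAINER]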

Delete Container

Removing a container cannot be undone – beware!
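
    lxc delete [CONTAINER]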

Images

Linux Containers are created from templates or images that are stored locally or downloaded from remote servers.

List Image Repositories

Local and Remote LXC servers and remote image servers can be added to your LXC installation and can be used to download images from when required. Run the below command to see what sources you have.
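
    lxc remote list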

List available images

Images that have been downloaded, imported or cached are stored locally in the image repository. The output will list the image name, size and various other details.
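
    lxc image list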

Remote images that reside on an image repository or remote LXC server can also be listed. This is great for seeing what images are available when creating new containers. Change [REMOTE_NAME] to be the name of the image repository from the image list command. Note: you’ll need to keep the : symbol at the end.
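
    lxc image list [REMOTE_NAME]: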

Get image details

Further details can be obtained from an image file than what’s displayed with image list. The below command will detail all information known about the image. Replace [IMAGE_NAME] with a valid image name displayed in the image list command, such as ubuntu-xenial.
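
    lxc image info [IMAGE_NAME]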

Add a new Image Repository

There are various public image repositories that can be added to your LXC installation. LinuxContainers.org is a common one and hosts several distribution types. Replace [NAME] with the text name you’d like to give to the repository (it’s just an alias) and [HOST] with the address of the repository.
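
    lxc remote add [NAME] [HOST]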

For example
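
    lxc remote add lxc-images images.linuxcontainers.org

The alias lxc-images is just an example name; depending on your LXD version, this public server may already be configured as the images: remote.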

Delete a local image

Replace [IMAGE_NAME] with the alias or fingerprint of the image.
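
    lxc image delete [IMAGE_NAME]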

Create new Image from Running Container

You can create a new image from an existing container with a simple command, however it's important to remember that the created image will contain everything in the running container – such as SSH keys, data, etc. Make sure you clean up anything which may be sensitive before running this command.

You’ll need to change [CONTAINER] to your Linux container name and [ALIAS] to the name you’d like to use for your new image.
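
    lxc publish [CONTAINER] --alias [ALIAS]

Note that the container will usually need to be stopped before it can be published, or you can add --force to publish a running container.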

Configuration

All the below instructions will assume you’re referring to a container alias called [CONTAINER]. You’ll need to replace this, wherever it’s seen, with the name of the Linux Container you’re acting on.

Any config command using set can be altered to use get to retrieve the current setting. If get returns nothing then the value has not been manually set and the default will be used.

Auto Start Container

Set the container to start automatically when the LXC service starts – usually at host boot time. Use 1 to enable and 0 to disable.
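
    lxc config set [CONTAINER] boot.autostart 1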

You can also use boot.autostart.delay to set a delay in seconds after starting this container, before starting the next.
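
For example, to wait 30 seconds (an illustrative value) before the next container is started:

    lxc config set [CONTAINER] boot.autostart.delay 30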

Start up can be ordered using boot.autostart.order to prioritise which containers are started first. Higher numbers are started first.
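
    lxc config set [CONTAINER] boot.autostart.order 10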

CPU Limits

See CPU Resource Limits for more information on constraining CPU resources.



Setting CPU Resource Limits With LXC

Category : How-to


Linux Container (LXC) management is now often dealt with by LXD, the Canonical-led project built on top of LXC.

LXD offers a suite of options for controlling Linux Container resources and setting limits where appropriate. This post will talk about setting constraints on CPU, however other options are available for limiting almost any sort of resource, such as network, disk I/O, memory and so on.

Available Limits

CPU management is done in 1 of 4 ways, depending on your expected workload and host CPU management regime.

  1. Number of CPUs – set the number of CPU cores that LXC can use with this container and automatically distribute CPU time amongst guests when there is competition for CPU time. The value used is an integer, for example 2.
  2. Specific cores – specify the physical core(s) for the container to use and distribute available CPU time between containers when multiple containers use the same cores. The value used is an integer or range and can be comma separated, for example 2, 0-1 or 0-1,3,5-9.
  3. Capped share – allow a specified percentage of CPU time for the container, or more if it’s available. When the host is not under load then a container can use any available CPU however when there is contention for CPU then the container will be limited to the specified amount. The container will see all host CPU cores (in TOP, for example).
  4. Limited time share – will limit the container CPU time to be whatever is specified out of each 200ms. Even if more CPU is available, only what is specified per 200ms slice is allowed. The container will see all host CPU cores (in TOP, for example).

Setting Limits

Setting limits is done with the lxc command. There are two config keys: limits.cpu for points 1 and 2 above, or limits.cpu.allowance for points 3 and 4.
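
    lxc config set [CONTAINER] limits.cpu [VALUE]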

  • [CONTAINER] is the name of the container – can be obtained from lxc list if you’re unsure.
  • [VALUE] is a valid value from point 1 or 2 above.

OR
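
    lxc config set [CONTAINER] limits.cpu.allowance [VALUE]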

  • [CONTAINER] is the name of the container – can be obtained from lxc list if you’re unsure.
  • [VALUE] is a valid value from point 3 or 4 above.

CPU Limit Examples

Set the container nginx-proxy to use any 2 CPUs on the host.
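
    lxc config set nginx-proxy limits.cpu 2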

Set the container nginx-proxy to use physical CPU 0, 3, 7, 8 and 9 on the host.
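
    lxc config set nginx-proxy limits.cpu 0,3,7-9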

Set the container nginx-proxy to use 20% of the available CPU on the host or more if it’s available.
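
    lxc config set nginx-proxy limits.cpu.allowance 20%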

Set the container nginx-proxy to use no more than 50% of the available CPU on the host, or 100ms for every 200ms of CPU time available.
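
    lxc config set nginx-proxy limits.cpu.allowance 100ms/200ms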

You can view /proc/cpuinfo to see the cores available to your container, however it will not reflect any additional scheduling limits or priorities that are in place.

CPU Priority

The last option around CPU limiting is the priority of CPU time. This option only kicks in when the host is overcommitted on CPU resource and containers are fighting for CPU time. This can either be on a single core (if using above points 1 or 2) or system wide (if no CPU limiting is in place or using above points 3 or 4).

Available values are 0 to 10 inclusive and lower numbers mean a lower priority – a container with a higher number will get CPU time before those with lower numbers.

The below command sets the container nginx-proxy to have a CPU priority of 5.
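
    lxc config set nginx-proxy limits.cpu.priority 5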

The below command sets the container php-backend to have a CPU priority of 2 and therefore would get less CPU time than container nginx-proxy when CPU is under contention.
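
    lxc config set php-backend limits.cpu.priority 2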



Proxmox 4.x bind mount – mount storage in an LXC container


An LXC container's storage is simple to set up and maintain, and is usually done through either a web based GUI or a command line utility. It's simple to set the size of the disk allocated to an LXC container, and you can increase it easily, even while the container is still running.

Whilst simple to set up and administer, the standard storage options of LXC containers are limited. For example, you can't mount an NFS share in an LXC container, nor can you have multiple disks mounted as /dev block devices.

That’s where a bind mount comes in. You can add one or more mount points to your LXC container config that specifies a source path and a target path which is activated when the container starts. The source path would be a location on the host machine (the physical host running the LXC container – the Proxmox host in this example). The target is a location inside of the LXC container such as /mnt/myshare. This means that you can mount an NFS share, a GlusterFS share, several physical disks or anything else that can be mounted on your host and pass it through to your container.

Before you start, you’ll need to make sure both the host location and the target container location exist, otherwise the container will fail to start. You’ll then need to edit your LXC container config file. On Proxmox 4.x this can be found in /etc/pve/lxc/ and then the ID of your container. In this example the container we’re working on has an ID of 101.
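
For example, for container 101 (using nano here, but any editor will do):

    nano /etc/pve/lxc/101.conf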

Add the following row and substitute SOURCE with the path that you’d like to pass through to your container and TARGET to the path inside the container.
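
    mp0: SOURCE,mp=TARGET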

The below example will make /mnt/pve/nfs-share available in the container at /mnt/nfs.
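
    mp0: /mnt/pve/nfs-share,mp=/mnt/nfs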

Then restart your CT for the changes to take effect.
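
    pct stop 101
    pct start 101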

 

If you have multiple paths to mount then you can increment mp0 to mp1, mp2, etc.

If you’re using version 3.x of Proxmox, or stand alone OpenVZ then see Proxmox bind mount for OpenVZ.