Setting CPU Resource Limits With LXC

Category : How-to


Linux Container (LXC) management is now often handled by LXD, the Canonical-led project built on top of LXC.

LXD offers a suite of options for controlling Linux Container resources and setting limits where appropriate. This post covers setting constraints on CPU; however, similar options are available for limiting almost any kind of resource, such as network, disk I/O and memory.

Available Limits

CPU management is done in one of four ways, depending on your expected workload and host CPU management regime.

  1. Number of CPUs – set the number of CPU cores that LXC can use for this container and automatically distribute CPU time amongst guests when there is competition for CPU time. The value used is an integer, for example 2.
  2. Specific cores – specify the physical core(s) for the container to use and distribute available CPU time between containers when multiple containers use the same cores. The value used is an integer or range and can be comma separated, for example 2, 0-1 or 0-1,3,5-9.
  3. Capped share – allow a specified percentage of CPU time for the container, or more if it's available. When the host is not under load a container can use any available CPU; when there is contention for CPU the container will be limited to the specified amount. The container will see all host CPU cores (in top, for example).
  4. Limited time share – limit the container's CPU time to whatever is specified out of each 200ms slice. Even if more CPU is available, only the specified amount per 200ms slice is allowed. The container will see all host CPU cores (in top, for example).

Setting Limits

Setting limits is done with the lxc command. There are two options: limits.cpu for points 1 and 2 above, or limits.cpu.allowance for points 3 and 4.
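
For points 1 and 2 the command takes roughly this form, using placeholders:

  lxc config set [CONTAINER] limits.cpu [VALUE]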

  • [CONTAINER] is the name of the container – can be obtained from lxc list if you’re unsure.
  • [VALUE] is a valid value from point 1 or 2 above.

OR
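
For points 3 and 4 the form is similar, using the limits.cpu.allowance key:

  lxc config set [CONTAINER] limits.cpu.allowance [VALUE]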

  • [CONTAINER] is the name of the container – can be obtained from lxc list if you’re unsure.
  • [VALUE] is a valid value from point 3 or 4 above.

CPU Limit Examples

Set the container nginx-proxy to use any 2 CPUs on the host.
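
For example:

  lxc config set nginx-proxy limits.cpu 2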

Set the container nginx-proxy to use physical CPU 0, 3, 7, 8 and 9 on the host.
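
Using a comma-separated list of cores, something like:

  lxc config set nginx-proxy limits.cpu 0,3,7-9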

Set the container nginx-proxy to use 20% of the available CPU on the host or more if it’s available.
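
Using a percentage allowance:

  lxc config set nginx-proxy limits.cpu.allowance 20%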

Set the container nginx-proxy to use no more than 50% of the available CPU on the host, or 100ms for every 200ms of CPU time available.
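
Using a hard time slice:

  lxc config set nginx-proxy limits.cpu.allowance 100ms/200ms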

You can view /proc/cpuinfo to see the available cores in your container, however it will not reflect any additional scheduling limits or priorities.

CPU Priority

The last option around CPU limiting is the priority of CPU time. This option only kicks in when the host is overcommitted on CPU resources and containers are competing for CPU time. This applies either per core (if using points 1 or 2 above) or system wide (if no CPU limiting is in place or if using points 3 or 4 above).

Available values are 0 to 10 inclusive; lower numbers mean a lower priority, so a container with a higher number will be given CPU time before containers with lower numbers.

The below command sets the container nginx-proxy to have a CPU priority of 5.
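
Assuming the limits.cpu.priority configuration key:

  lxc config set nginx-proxy limits.cpu.priority 5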

The below command sets the container php-backend to have a CPU priority of 2 and therefore would get less CPU time than container nginx-proxy when CPU is under contention.
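
And similarly:

  lxc config set php-backend limits.cpu.priority 2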



Bash Command to Check Container Disk Space in Proxmox/ OpenVZ


Keeping an eye on all of your containers' disk space can be time consuming if done one by one. Luckily, the vzlist command gives us access to much of the data and many of the counters associated with each container.

Running vzlist on its own will give a list of the currently running containers and a few extra columns of information such as IP address and hostname.
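
That is, simply:

  vzlist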

You might be thinking that the above command doesn’t show anything about containers which are currently turned off, or about disk space. You’d be right!

Adding the --all switch will include all containers, regardless of their running state.
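
For example:

  vzlist --all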

To add the disk space information we add the --output switch followed by the column names we want to display. For disk space, the column name is helpfully named diskspace so the command will look like this:
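
  vzlist --all --output diskspace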

We now see the disk space of all of our containers, but with a problem. We have no idea (although in this example we do because we only have one container) which container the disk space output is for. We need to add a few more columns to the --output switch such as ctid and hostname (see below for a complete list of output columns).
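
For example:

  vzlist --all --output ctid,hostname,diskspace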

The last trick here is to add a sort using the --sort switch and the column name:
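
  vzlist --all --output ctid,hostname,diskspace --sort diskspace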

Let’s put all that together, plus a spot of awk magic to get a nice disk full percentage that we can work with:
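
A sketch along these lines, assuming the diskspace.h column (the disk space hard limit) is set for your containers and using -H to suppress the header row:

  vzlist --all -H --output ctid,hostname,diskspace,diskspace.h | \
    awk '{ pct = ($4 > 0) ? $3 / $4 * 100 : 0; printf "%s %s %.0f%%\n", $1, $2, pct }'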

The output lists each container's CTID, hostname and the percentage of its disk space used.

Complete list of vzlist column headings

You can get a complete list of vzlist column headings with the following command:
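
  vzlist -L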

 



Create Your First Docker Container

Category : How-to


Docker is probably one of the easiest environments to create a virtualised instance based on a number of flavours of operating systems. Rather than having to install an operating system yourself, you can download one of the many guest templates or 'images' available directly from the Docker community.

See my blog post on installing Docker on Ubuntu 14.04 if you don’t currently have Docker installed.

There are a number of commands which are required to manage Docker containers and images. First off, let’s see if we have any images in our local Docker library.
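
For example:

  docker images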

The docker images command lists the available local images which you can use to create a Docker container. The above output does not show any local images so let's download one from the central Docker repository.

We must choose which image to download and use to create our first Docker container. There are literally thousands of images available on the central repository and all can be downloaded through the docker command. Let’s use the search command to find an image to download.
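
For example, to search for Ubuntu images:

  docker search ubuntu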

This will display a huge list of all the images available containing the word ubuntu. As you can imagine, there will be hundreds because not only are base OS images available, but customised images containing specific applications or set ups.

Let’s download the basic ubuntu 14.04 image:
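
Assuming the standard ubuntu:14.04 tag:

  docker pull ubuntu:14.04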

You can check this has downloaded the image to your local store with the above docker images command. We will also need to make a note of the image ID so that we can use it to create a container from it.

The next step is to create a container and make the required changes. Creating a container in Docker is done with the run command followed by, amongst other things, a command to run within the container. We are going to create a new container and use a bash session to customise the container before saving it as a new image for use in the future.

Create the Docker container with the run command and specify the bash shell to be executed on completion. This will leave us with a bash session which we can use to customise the image. Replace the ad892dd21d60 ID with the ID of the image we downloaded in the previous step.
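
Something like:

  docker run -i -t ad892dd21d60 /bin/bash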

You now have an active shell on the container which has been created with the id 3a09b2588478. Type exit to end the session in your guest container and the container will be stopped and kept available on your Docker system.

Run the ps Docker command to see what containers are known to your Docker system.
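
The -a switch includes containers that are not currently running:

  docker ps -a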

The above output shows 3 containers which are available in my Docker system with the container ID on the left. We can re-enter one of these containers to make our changes, but first we need to start it. I’m going to use container ID 3a09b2588478 for the rest of this example but yours will be a different ID.
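
For example:

  docker start 3a09b2588478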

We can now attach to the container to create a shell where we can make our modifications.
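
For example:

  docker attach 3a09b2588478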

You now have a shell running on the container which you can use to make your changes to the container. Let’s keep it simple and just run an upgrade with apt-get and then exit. In the real world, you might install an application, or define your configuration such as LDAP SSH login.
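
Inside the container, something like:

  apt-get update
  apt-get upgrade -y
  exit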

The last step in our example is to save the container as a new image which can be used to create future Docker containers. You’ll need to specify the container ID as well as the name of the image to use. You can specify a new image name or overwrite the existing image name.
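
For example, using a made-up image name of my-ubuntu:

  docker commit 3a09b2588478 my-ubuntu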

 

And that's all there is to it! You have created a new Docker container, from one of the images available from Docker, made some changes and saved it locally for future use. Of course, there are plenty more ways to use Docker, but I hope this has been useful for getting a basic understanding of how Docker works.

Next steps: See my post on using a Dockerfile to automate Docker image creation.



Create a Ubuntu 14.04 OpenVZ Template for Proxmox


The latest Ubuntu long term support release, Ubuntu 14.04, is now available.

There isn't currently a template available over on OpenVZ, however I'm sure that will shortly be rectified. In the meantime you can use the below steps to create a 14.04 Ubuntu template for OpenVZ/ Proxmox. This template has only been lightly tested so please report any errors as you find them.

This template is BETA, please report any problems in the comments.

Visit Downloads Page

 

You can download a pre-created VM from here directly, or you can create your own using the below notes.

 

Before continuing, this guide assumes that you already have an installation of Ubuntu up and running which you can SSH to. This could be either a KVM or physical machine.

We will use debootstrap to create the template, so make sure it's installed before continuing.
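
On Ubuntu, something like:

  apt-get install debootstrap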

Use debootstrap to download and configure all the required packages to a temporary directory. For this example, we’ll use /tmp/deb.
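
A sketch, assuming an amd64 build and the default Ubuntu mirror (trusty is the 14.04 codename):

  debootstrap --arch amd64 trusty /tmp/deb http://archive.ubuntu.com/ubuntu/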

Copy the below script into the tmp directory of the template root which has just been created. For this example you’ll need to copy the text into this path:

Make the script runnable with chmod.
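
Assuming the script was saved as setup.sh (a placeholder name):

  chmod +x /tmp/deb/tmp/setup.sh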

Run the above script using the chroot command to set up the template.
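
Again assuming the placeholder name setup.sh; note the path is given relative to the new root:

  chroot /tmp/deb /tmp/setup.sh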

The script will now run and set up the template using /tmp/deb/ as the template's root.

Once completed, create an archive of the template root directory and install it on your OpenVZ/ Proxmox server.
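
A sketch of the archive step, producing the file referenced below:

  cd /tmp/deb
  tar -czf /tmp/ubuntu-14.04-x86_64-initial1.tar.gz .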

Copy the /tmp/ubuntu-14.04-x86_64-initial1.tar.gz file to the template cache directory of your Proxmox install and create your first Ubuntu 14.04 container!




Basic OpenVZ Container Management

Category : How-to


OpenVZ containers, or CTs for short, work on the premise of using a template as the starting point for each virtual instance. The template usually holds the basic applications, such as an SSH server, needed to create a functioning running instance.

See my blog post on Setting up an OpenVZ server for information on creating an OpenVZ server.

Download an OpenVZ Template

OpenVZ hosts a suite of templates for all common Linux distributions. Before you can create a CT you will need to choose a template from the below site and download it to your OpenVZ server. The template must be saved in your server's template directory; by default this is /vz/template/cache.

Download your templates from: http://wiki.openvz.org/Download/template/precreated

I will use the Debian 7 template for this example.
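
For example, something like this (the exact filename may differ on the download page):

  cd /vz/template/cache
  wget http://download.openvz.org/template/precreated/debian-7.0-x86_64.tar.gz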

Create your first OpenVZ container

Most of the administration tasks for a container, such as creating and destroying a container, use the vzctl command.

Before we create our first container we need to understand the components of the create command. This is an example of a basic create command:
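
  vzctl create [CTID] --ostemplate [TEMPLATE] --config [CONFIG]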

  • 200, in the example below, is the container ID. This is a unique ID which represents the CT being created. We will use this ID later when we start and stop the machine. You can use virtually any number, but we usually use a three digit number.
  • --ostemplate is the template file name which we downloaded in the previous section. This file, with a tar.gz extension, must exist in the template directory of your OpenVZ server. By default, the templates directory is /vz/template/cache.
  • --config decides how much RAM, disk and other properties the CT will assume. OpenVZ have created some example configurations for us, basic being one of them.

Example:
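
Using CTID 200 and the Debian 7 template downloaded above (filename assumed):

  vzctl create 200 --ostemplate debian-7.0-x86_64 --config basic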

List available OpenVZ containers

Now that we have created a container, we can list the details with vzlist. Running this command will list all turned on containers so we’ll need the -a switch to list the turned off ones as well.
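
For example:

  vzlist -a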

Start/ Stop/ Restart an OpenVZ container

Starting, stopping and restarting a container are done via the vzctl command with either start, stop or restart and the container ID.
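
For example, with container 200:

  vzctl start 200
  vzctl stop 200
  vzctl restart 200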

Configure an OpenVZ container

There are many configuration options for an OpenVZ container which specify disk space, SWAP, networking, CPU and plenty of others. I’ll cover a few basic options here.

Networking is a common option, and something we will need in our OpenVZ container. First let's set the hostname to example.jamescoyle.net.
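
Something like this; the --save switch persists the change to the container's config file:

  vzctl set 200 --hostname example.jamescoyle.net --save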

Next, let’s add an IP address on the same range as our host. Make sure you check that this IP is free and not already in use by another machine on your network.
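
For example, using a made-up address of 192.168.1.200:

  vzctl set 200 --ipadd 192.168.1.200 --save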

The last part of our basic network configuration is to add some nameservers so that DNS entries can be resolved. I’ll use the Google nameservers for this example but you may also wish to include your own local servers.
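
For example:

  vzctl set 200 --nameserver 8.8.8.8 --nameserver 8.8.4.4 --save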

Execute commands in an OpenVZ container

Using this current example, we can’t connect to our container because we haven’t set a root password. We need to issue the passwd command inside the container and type a password.

There are two ways to do this. We can enter the container directly from the host which will give us a shell running on the container itself.
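
For example:

  vzctl enter 200
  passwd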

You can then issue any further commands you require.

The other option is to run a single command from the host using vzctl exec followed by the command to execute.
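
For example, running an arbitrary command such as hostname:

  vzctl exec 200 hostname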

Remove an OpenVZ container

Before you remove an OpenVZ container, you must make sure it is stopped. Once the container is stopped, you can use the vzctl destroy command to delete the container permanently.
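
For example:

  vzctl stop 200
  vzctl destroy 200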

Be careful, there is no confirmation for the destroy command!

