Proxmox 4.2 is now available

Category : Tech News


A new version of Proxmox has been released, building on the solid foundations laid down with the version 4 branch of Proxmox VE. The latest version, 4.2, brings a new GUI, integration with Let’s Encrypt, updated packages and countless bug fixes.

You can download the ISO from https://www.proxmox.com/en/downloads/category/iso-images-pve

Highlights of the 4.2 release:

  • Based on Debian 8.4
  • New GUI using Sencha Ext JS 6
  • Let’s Encrypt support
  • Numerous LXC updates
    • Ability to add network rate limits
    • Add mount points via the GUI
    • Improved backups

Upgrade

Before updating, make sure all your VMs have been stopped, both LXC and KVM. Ensure you have the required repository entries for apt-get: you’ll either need a valid license key or to add the less stable pve-no-subscription repository. See Proxmox 3.1 package/updates manager (this also works for version 4.x) for more information.

Run the below commands on each server in your cluster.
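On a standard install that’s the usual apt-get update and dist-upgrade sequence – a minimal sketch, assuming the repositories are configured as described above:

    apt-get update
    apt-get dist-upgrade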

Restart all Proxmox servers to complete the installation.



Proxmox 4.x bind mount – mount storage in an LXC container


An LXC container’s storage is simple to set up and maintain, usually through either a web-based GUI or a command line utility. It’s simple to set the size of disk allocated to an LXC container, and you can increase it easily, even while the container is running.

Whilst simple to set up and administer, the standard storage options of LXC containers are limited. For example, you can’t mount an NFS share in an LXC container, nor can you have multiple disks mounted as /dev block devices.

That’s where a bind mount comes in. You can add one or more mount points to your LXC container config that specify a source path and a target path; these are activated when the container starts. The source path is a location on the host machine (the physical host running the LXC container – the Proxmox host in this example). The target is a location inside the LXC container, such as /mnt/myshare. This means that you can mount an NFS share, a GlusterFS share, several physical disks or anything else that can be mounted on your host and pass it through to your container.

Before you start, you’ll need to make sure both the host location and the target container location exist, otherwise the container will fail to start. You’ll then need to edit your LXC container config file. On Proxmox 4.x this can be found in /etc/pve/lxc/, named after the ID of your container. In this example the container we’re working on has an ID of 101, so the file is /etc/pve/lxc/101.conf.

Add the following row, substituting SOURCE with the path that you’d like to pass through to your container and TARGET with the path inside the container:
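    mp0: SOURCE,mp=TARGET

This is the Proxmox 4.x mp[n] mount point syntax – mp0 denotes the container’s first mount point.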

The below example will make /mnt/pve/nfs-share available in the container at /mnt/nfs.
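    mp0: /mnt/pve/nfs-share,mp=/mnt/nfs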

Then restart your CT for the changes to take effect.
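With a container ID of 101, one way to do that from the Proxmox host is a full stop and start (rather than a reboot from inside the container) so the new mount point entry is read:

    pct stop 101
    pct start 101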


If you have multiple paths to mount then you can increment mp0 to mp1, mp2, etc., as in the example below.
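For example (the second, GlusterFS-backed path here is purely illustrative):

    mp0: /mnt/pve/nfs-share,mp=/mnt/nfs
    mp1: /mnt/pve/gluster-share,mp=/mnt/gluster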

If you’re using version 3.x of Proxmox, or stand alone OpenVZ then see Proxmox bind mount for OpenVZ.



qcow2 Disk Images and Performance


qcow2 is a virtual disk image format developed by the guys who created QEMU and is one of the most versatile virtual disk formats available. It’s the default and preferred virtual disk format for the Proxmox VE hypervisor and should be used for most virtual machines.

qcow2 offers the following features:

  • Sparse space allocation which means that the entire virtual disk size doesn’t need to be allocated on the hard drive when it’s created. Only the physical space needed by actual data stored to the virtual disk is required.
  • Snapshots can be stored and rolled back to thanks to the copy-on-write process which is used to write to qcow2 files.
  • Linked or chained files can be used. For example, a read only base file could be used to hold ‘system’ files (a gold plate image, if you will), and any changes could be written to an additional file leaving the original intact and unchanged. Multiple machines could use this base file at once, therefore reducing space requirements.
  • AES encryption can be used to encrypt all data at rest.
  • Compression, based on zlib, to reduce physical space requirements and reduce read bytes.

Because of all these features, qcow2 files carry a processing overhead when compared to raw files: any data read from or written to a qcow2 virtual disk has to go through a process that can slow the read or write operation. This overhead on IO operations, relative to raw type storage, is something to consider when deciding which features to use.

Increase qcow2 Performance

Sparse Space Allocation

Anything stored on a virtual disk has to be, at some point, stored on a physical medium such as a hard disk. In addition to the data, a virtual disk has a small amount of metadata associated with it that is usually stored in the same file. For example, unlike a physical hard disk, a virtual disk has no physical constraint on how large it can be, so the virtual size is one of the pieces of data we need to store in the qcow2 file.

In addition to that, and just like a physical hard drive, data in a qcow2 file is stored in blocks or clusters and a lookup is required to determine what data is in which cluster. Think of this as a shelf full of numbered boxes, and having a book (or index) which tells you what each box number contains. All of this cluster information is also stored within the qcow2 file consuming disk space that is relative to the data capacity of the qcow2 file. For example, a qcow2 file that can store 1GB of data would have a much smaller metadata footprint than a qcow2 file that can store 100GB of data.


Anyway, back to sparse files. The idea of a sparse file is to remove the need to allocate the full size of the file on a physical disk. I can, for example, create a qcow2 image with a data capacity of 10GB that will take up just a few KBs of physical space until data is saved to the qcow2 image. As data is saved, the physical space used by the image will increase (the data has to be stored somewhere, right?). The metadata will grow too, because each new cluster required by the qcow2 file will have its own entry in the metadata section of the file.

qemu-img comes with various options for setting the allocation when creating new disk images.

  • preallocation=metadata – allocates the space required by the metadata but doesn’t allocate any space for the data. This is the quickest to provision but the slowest for guest writes.
  • preallocation=falloc – allocates space for the metadata and data but marks the blocks as unallocated. This will provision slower than metadata but quicker than full. Guest write performance will be much quicker than metadata and similar to full.
  • preallocation=full – allocates space for the metadata and data and will therefore consume all the physical space that you allocate (not sparse). All empty allocated space will be zeroed. This is the slowest to provision and will give similar guest write performance to falloc.

Example command:
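A minimal sketch – the path, size and preallocation value here are illustrative:

    qemu-img create -f qcow2 -o preallocation=metadata /tmp/test.qcow2 10G

Swap preallocation=metadata for falloc or full as required.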

The performance impact here comes when the virtual image needs to grow in order to store new information written to it. For each new write, a new cluster needs to be provisioned along with a metadata index entry referencing the new cluster. Depending on the option selected above, the OS may have to allocate a new sector for both the index and the data cluster, incurring a performance penalty. Once the disk has been fully expanded (e.g. with preallocation=full) there is no penalty on assigning a new cluster, as all the clusters are already assigned and available.

See qcow2 preallocation for some examples and benchmarks of the above attributes.

Encryption

qcow2 images are not encrypted by default, so not using encryption couldn’t be simpler. Of course, your data will not be encrypted (unless you use some other process on top of the virtual storage layer) but you’ll save all those CPU cycles when reading and writing the data.
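If you did want the AES encryption described above, it’s enabled when the image is created. A minimal sketch using qemu-img’s legacy encryption option (file name and size are illustrative):

    qemu-img create -f qcow2 -o encryption=on encrypted.qcow2 10G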

Compression

qcow2 is, at best, a bit weird when it comes to compression (encryption works the same way, too!) in that compression is a one-time process that you run against an existing image. Any data written after this will be stored uncompressed.
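The one-off compression pass is performed by converting an existing image with the -c flag. A minimal sketch (file names are illustrative):

    qemu-img convert -c -O qcow2 image.qcow2 image-compressed.qcow2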

The next thing is to understand compression itself – compression (under the right circumstances) will reduce the size of the data stored on disk at the expense of CPU to compress (a one-off cost) and decompress (paid every time the data is accessed) the data. In certain circumstances, compression can result in a quicker read for the process consuming the data, such as where CPU is abundant and IO bandwidth is very small.

As always, testing your scenarios is the best way to understand the impact.



Proxmox 4.0 Has Been Released

Category : Tech News


The Proxmox VE team have released the latest version of Proxmox VE, Proxmox VE 4.0.

There are some pretty big changes in Proxmox version 4 which really warrant the new 4.0 version increment. See my other blog post on the initial 4.0 BETA 1 release notes for the bigger picture, or the below bullet points for the latest changes.

  • Improvements to the new container engine based on LXC, especially the integration into the storage model.
  • Migration path from OpenVZ to LXC
  • Linux Kernel 4.2
  • Ceph Server packages (0.94.x – hammer release)
  • Embedded NoVNC console
  • Improved IPv6 support
  • Redesigned HA
  • Countless bug fixes

You can download the ISO from their download site and install it onto a new machine.


Proxmox 4.0 Beta 2 is Now Available

Category : Tech News


The Proxmox VE team have released a new BETA version of their 4.0 branch to the public.


There are some pretty big changes in Proxmox v4 which really warrant the new 4.0 version increment. See my other blog post on the initial 4.0 BETA 1 release notes for the bigger picture, or the below bullet points for the latest changes.

  • Improvements to the new container engine based on LXC, especially the integration into the storage model.
  • Migration path from OpenVZ to LXC
  • Linux Kernel 4.2
  • Ceph Server packages (0.94.x – hammer release)
  • Embedded NoVNC console
  • Improved IPv6 support
  • Countless bug fixes

You can download the ISO from their download site and install it onto a new machine. Unfortunately, because it’s still a BETA product there is no upgrade path from 3.4. I’d expect this to change in the near future.

It’s worth noting, however, that the website states that any BETA version of release 4.0 will be upgradeable to the full 4.0 when it’s available.


My Proxmox VE Bash Scripts on GitHub

Category : Knowledge


I’ve created a public repository on GitHub with a few Bash scripts which may (or may not) be helpful for anyone administering a Proxmox VE server.

I’ll add to this repository over time with any new scripts I write. Let me know if they are useful, or if you have any of your own in the comments.

Currently the scripts are:

  • backup-all – will backup all OpenVZ Containers on the Proxmox cluster.
  • ct-disk-space – a print out of the disk space used by all OpenVZ Containers on the server.
  • migrate – will migrate a Container to a different storage ID.
  • restore-all – will restore all OpenVZ Containers on a cluster to the latest available backup.

Link to GitHub proxmox-scripts repository: https://github.com/JAC2703/proxmox-scripts

