Proxmox 4.0 Beta 2 is Now Available

Category : Tech News

The Proxmox VE team have released a new BETA version of their 4.0 branch to the public.


There are some pretty big changes in Proxmox v4 which really warrant the new 4.0 version increment. See my other blog post on the initial 4.0 BETA 1 release notes for the bigger picture, or the bullet points below for the latest changes.

  • Improvements to the new container engine based on LXC, especially the integration into the storage model.
  • Migration path from OpenVZ to LXC
  • Linux Kernel 4.2
  • Ceph Server packages (0.94.x – hammer release)
  • Embedded NoVNC console
  • Improved IPv6 support
  • Countless bug fixes

You can download the ISO from their download site and install it onto a new machine. Unfortunately, because it’s still a BETA product there is no upgrade path from 3.4. I’d expect this to change in the near future.

It’s worth noting, however, that the website states that any BETA version of release 4.0 will be upgradeable to the full 4.0 when it’s available.


My Proxmox VE Bash Scripts on GitHub

Category : Knowledge

I’ve created a public repository on GitHub with a few Bash scripts which may (or may not) be helpful for anyone administering a Proxmox VE Server.

I’ll add to this repository over time with any new scripts I write. Let me know if they are useful, or if you have any of your own in the comments.

Currently the scripts are:

  • backup-all – backs up all OpenVZ Containers on the Proxmox cluster.
  • ct-disk-space – prints the disk space used by all OpenVZ Containers on the server.
  • migrate – migrates a Container to a different storage ID.
  • restore-all – restores all OpenVZ Containers on a cluster to the latest available backup.
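To give a flavour of the approach, a minimal sketch of what a backup-all style script might do is shown below. This is an illustrative guess, not the repository’s actual code; the vzdump options used are assumptions you would tailor to your own setup.

```shell
#!/bin/sh
# Hypothetical sketch of a backup-all style script: list every
# container ID with vzlist, then run vzdump against each one.
backup_all() {
    # --no-header suppresses the column title line so we get bare CTIDs
    ctids=$(vzlist --all --output ctid --no-header)
    for ctid in $ctids; do
        vzdump "$ctid" --compress gzip --mode snapshot
    done
}
```

The snapshot mode keeps containers online during the backup; swap it for suspend or stop if your storage doesn’t support snapshots.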

Link to GitHub proxmox-scripts repository:

Proxmox 4.0 Beta 1 is Now Available

Category : Tech News

The latest release of Proxmox VE has entered its first BETA today with the release of Proxmox VE 4.0 BETA 1.

It’s the first public release of version 4.0 and comes with some pretty big changes!

HA Clusters are now configured using the Proxmox Web GUI and don’t require separate fencing kit – it’s provided out of the box. It’s about time, in my opinion, as plenty of other software manages HA without 3rd party processes to tell you what’s up and what’s down. This is a very welcome addition to any small Proxmox Cluster.

Linux Containers will replace OpenVZ as the Proxmox VE container solution. As you’d expect, LXCs will work with the usual Storage plugins offered by Proxmox, although support for Sheepdog, Ceph, iSCSI, DRBD and GlusterFS will come in a future release.

There are some other updates too: the latest DRBD9 packages are shipped, and the whole thing runs on the latest Debian release, Debian Jessie 8.1.

You can download the ISO from their download site and install it onto a new machine. Unfortunately, because it’s still a BETA product there is no upgrade path from 3.4. I’d expect this to change in the near future.

It’s worth noting, however, that the website states that any BETA version of release 4.0 will be upgradeable to the full 4.0 when it’s available.


Reverse Proxy Proxmox VNC With Nginx

The noVNC console in the Proxmox Web GUI transfers its data through a technology called Websockets. Websockets often work in tandem with normal HTTP traffic and therefore often use the same end point (IP and port). Proxmox uses port 8006 by default for all web traffic; this includes the Web GUI you see and a websockets stream for the VNC console.

If you don’t already have Nginx set up, see my other post on How to reverse proxy the Proxmox Web GUI.

You’ll also need one of the more recent versions of Nginx for this to work.

If you use Nginx to reverse proxy your Proxmox Web GUI already, making it websocket compatible is very easy. In fact, it’s as easy as adding three additional lines to your Nginx config file for the location tag that serves your Proxmox Web GUI.

Open up your sites-available config file for your Proxmox site with a text editor:

vi /etc/nginx/sites-available/

Find the location tag for your site and add the following:

proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";

The resulting site should look similar to the below:

upstream proxmox {
    # the address of your Proxmox server goes here, for example:
    # server 10.10.10.10:8006;
}

server {
    listen 80 default_server;
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 443;
    server_name _;
    ssl on;
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    proxy_redirect off;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass https://proxmox:8006;
    }
}


Bash Command to Check Container Disk Space in Proxmox/ OpenVZ

Keeping an eye on all of your containers’ disk space can be time consuming if done one by one. Luckily, the vzlist command gives us access to much of the data and many of the counters associated with each container.

Running vzlist on its own will give a list of the currently running containers and a few extra columns of information such as IP address and hostname.

100  27    running

You might be thinking that the above command doesn’t show anything about containers which are currently turned off, or about disk space. You’d be right!

Adding the --all switch will include all containers, regardless of their running state.

To add the disk space information we add the --output switch followed by the column names we want to display. For disk space, the column name is helpfully named diskspace, so the command will look like this:

vzlist --all --output diskspace

We now see the disk space of all of our containers, but with a problem: we have no idea (although in this example we do, because we only have one container) which container the disk space output is for. We need to add a few more columns to the --output switch, such as ctid and hostname (see below for a complete list of output columns).

vzlist --all --output ctid,hostname,diskspace
100  1184660

The last trick here is to add a sort using the --sort switch and the column name:

vzlist --all --output ctid,hostname,diskspace --sort diskspace
100  1184660

Let’s put all that together, plus a spot of awk magic to get a nice disk full percentage that we can work with:

vzlist --all --output ctid,hostname,diskspace,diskspace.s,diskspace.h --sort diskspace | awk '{if (NR>1) {printf("%-4s %-30s %-10s %-10s %-10s %d\n", $1, $2, $3, $4, $5, $3/$4*100)} else printf("%-4s %-30s %-10s %-10s %-10s %s\n", $1, $2, $3, $4, $5, "PERC_USED")}'


100 1184684 5242880  5767168  22
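If the one-liner is hard to follow, the same awk logic can be wrapped in a small function and spread over multiple lines. percent_used is a hypothetical helper name; the column positions assume the exact --output list used above (ctid, hostname, diskspace, diskspace.s, diskspace.h).

```shell
#!/bin/sh
# Reads vzlist output on stdin. Column 3 is used disk space and
# column 4 is the soft limit, so $3 / $4 * 100 is the percentage
# used; printf's %d truncates it to a whole number. Row 1 is the
# vzlist header, which is reprinted with a PERC_USED title appended.
percent_used() {
    awk 'NR == 1 { printf("%-4s %-30s %-10s %-10s %-10s %s\n",
                          $1, $2, $3, $4, $5, "PERC_USED"); next }
                 { printf("%-4s %-30s %-10s %-10s %-10s %d\n",
                          $1, $2, $3, $4, $5, $3 / $4 * 100) }'
}

# usage:
# vzlist --all --output ctid,hostname,diskspace,diskspace.s,diskspace.h \
#     --sort diskspace | percent_used
```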

Complete list of vzlist column headings

You can get a complete list of vzlist column headings with the following command:

vzlist -L
ctid            CTID
veid            CTID
vpsid           CTID
private         PRIVATE
root            ROOT
mount_opts      MOUNT_OPTS
origin_sample   ORIGIN_SAMPLE
hostname        HOSTNAME
name            NAME
smart_name      SMARTNAME
description     DESCRIPTION
ostemplate      OSTEMPLATE
ip              IP_ADDR
nameserver      NAMESERVER
searchdomain    SEARCHDOMAIN
status          STATUS
kmemsize        KMEMSIZE
kmemsize.m      KMEMSIZE.M
kmemsize.b      KMEMSIZE.B
kmemsize.l      KMEMSIZE.L
kmemsize.f      KMEMSIZE.F
lockedpages     LOCKEDP
lockedpages.m   LOCKEDP.M
lockedpages.b   LOCKEDP.B
lockedpages.l   LOCKEDP.L
lockedpages.f   LOCKEDP.F
privvmpages     PRIVVMP
privvmpages.m   PRIVVMP.M
privvmpages.b   PRIVVMP.B
privvmpages.l   PRIVVMP.L
privvmpages.f   PRIVVMP.F
shmpages        SHMP
shmpages.m      SHMP.M
shmpages.b      SHMP.B
shmpages.l      SHMP.L
shmpages.f      SHMP.F
numproc         NPROC
numproc.m       NPROC.M
numproc.b       NPROC.B
numproc.l       NPROC.L
numproc.f       NPROC.F
physpages       PHYSP
physpages.m     PHYSP.M
physpages.b     PHYSP.B
physpages.l     PHYSP.L
physpages.f     PHYSP.F
vmguarpages     VMGUARP
vmguarpages.m   VMGUARP.M
vmguarpages.b   VMGUARP.B
vmguarpages.l   VMGUARP.L
vmguarpages.f   VMGUARP.F
oomguarpages    OOMGUARP
oomguarpages.m  OOMGUARP.M
oomguarpages.b  OOMGUARP.B
oomguarpages.l  OOMGUARP.L
oomguarpages.f  OOMGUARP.F
numtcpsock      NTCPSOCK
numtcpsock.m    NTCPSOCK.M
numtcpsock.b    NTCPSOCK.B
numtcpsock.l    NTCPSOCK.L
numtcpsock.f    NTCPSOCK.F
numflock        NFLOCK
numflock.m      NFLOCK.M
numflock.b      NFLOCK.B
numflock.l      NFLOCK.L
numflock.f      NFLOCK.F
numpty          NPTY
numpty.m        NPTY.M
numpty.b        NPTY.B
numpty.l        NPTY.L
numpty.f        NPTY.F
numsiginfo      NSIGINFO
numsiginfo.m    NSIGINFO.M
numsiginfo.b    NSIGINFO.B
numsiginfo.l    NSIGINFO.L
numsiginfo.f    NSIGINFO.F
tcpsndbuf       TCPSNDB
tcpsndbuf.m     TCPSNDB.M
tcpsndbuf.b     TCPSNDB.B
tcpsndbuf.l     TCPSNDB.L
tcpsndbuf.f     TCPSNDB.F
tcprcvbuf       TCPRCVB
tcprcvbuf.m     TCPRCVB.M
tcprcvbuf.b     TCPRCVB.B
tcprcvbuf.l     TCPRCVB.L
tcprcvbuf.f     TCPRCVB.F
othersockbuf    OTHSOCKB
othersockbuf.m  OTHSOCKB.M
othersockbuf.b  OTHSOCKB.B
othersockbuf.l  OTHSOCKB.L
othersockbuf.f  OTHSOCKB.F
dgramrcvbuf     DGRAMRB
dgramrcvbuf.m   DGRAMRB.M
dgramrcvbuf.b   DGRAMRB.B
dgramrcvbuf.l   DGRAMRB.L
dgramrcvbuf.f   DGRAMRB.F
numothersock    NOTHSOCK
numothersock.m  NOTHSOCK.M
numothersock.b  NOTHSOCK.B
numothersock.l  NOTHSOCK.L
numothersock.f  NOTHSOCK.F
dcachesize      DCACHESZ
dcachesize.m    DCACHESZ.M
dcachesize.b    DCACHESZ.B
dcachesize.l    DCACHESZ.L
dcachesize.f    DCACHESZ.F
numfile         NFILE
numfile.m       NFILE.M
numfile.b       NFILE.B
numfile.l       NFILE.L
numfile.f       NFILE.F
numiptent       NIPTENT
numiptent.m     NIPTENT.M
numiptent.b     NIPTENT.B
numiptent.l     NIPTENT.L
numiptent.f     NIPTENT.F
swappages       SWAPP
swappages.m     SWAPP.M
swappages.b     SWAPP.B
swappages.l     SWAPP.L
swappages.f     SWAPP.F
diskspace       DSPACE
diskspace.s     DSPACE.S
diskspace.h     DSPACE.H
diskinodes      DINODES
diskinodes.s    DINODES.S
diskinodes.h    DINODES.H
laverage        LAVERAGE
uptime          UPTIME
cpulimit        CPULIM
cpuunits        CPUUNI
cpus            CPUS
ioprio          IOP
onboot          ONBOOT
bootorder       BOOTORDER
layout          LAYOUT
features        FEATURES
vswap           VSWAP
disabled        DISABL


Proxmox Error: (warning) Incorrect quota shutdown for id xxx recalculating disk usage

Category : How-to

I came across this error today when trying to manually back up a container to create an OpenVZ container template.

It occurred, in my case, when trying to start container VMID 600 after playing around inside its root folder from the host’s terminal. The following error was presented:

Starting container ...
vzquota : (warning) Incorrect quota shutdown for id 600, recalculating disk usage
vzquota : (error) quota check : lstat `EelIMf': Input/output error
vzquota on failed [1]
TASK ERROR: command 'vzctl start 600' failed: exit code 60

To fix the issue, we need to turn off the quota for the VMID and then back on again.

vzquota off 600

The output should look like below:

vzquota : (error) Quota is not running for id 600
vzquota : (warning) Repairing quota: it was incorrectly marked as running for id 600

You then need to turn the quota back on, to ensure your container’s disks are restricted to the size you have set for it.

vzquota on 600

You should now be able to successfully start the container, error free!
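The whole fix can be bundled into one small helper. fix_quota is a hypothetical wrapper name and the trailing vzctl start is my addition, not part of the original fix; substitute your own container ID for 600.

```shell
#!/bin/sh
# Reset the quota state for a container, then start it.
# fix_quota is a hypothetical wrapper around the steps above.
fix_quota() {
    ctid="$1"
    vzquota off "$ctid"   # clears the stale "incorrectly marked as running" state
    vzquota on "$ctid"    # re-enables disk quota enforcement
    vzctl start "$ctid"
}

# usage: fix_quota 600
```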
