Parse Proxmox Web API authentication ticket and the CSRFPreventionToken in Bash

Category : How-to

The Proxmox Web API can perform any action available in the web front end. By implementing a REST API, all commands have been exposed and can be used programmatically.

The API is secured using a token-based method which provides a ticket that must accompany all API requests except for the request that generates the token. The token is generated from an API call containing a username, password and security realm.

In this example we’ll use Bash to call the Proxmox Web API, authenticate with the root Proxmox user and parse the response for use in later API requests. Note that it’s not good practice to use the root account for API calls due to the security implications.

See this post for an introduction to the Proxmox Web API.

Add this function to the top of your Bash script. It parses the JSON response using standard command line tools (sed, awk and grep) to pull out the fields we need.

decodeDataFromJson(){
    # Strip the JSON wrapper, split each key/value pair onto its own line
    # and print the value for the key name passed in $2.
    echo "$1" \
        | sed 's/{"data":{//g' \
        | sed 's/\\\//\//g' \
        | sed 's/[{}]//g' \
        | awk '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}' \
        | sed 's/":"/|/g' \
        | sed 's/,/ /g' \
        | sed 's/"//g' \
        | grep -w "$2" \
        | awk -F "|" '{print $2}'
}
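
To see what the function is doing, it helps to know the shape of the response Proxmox returns from /access/ticket. The values below are made up purely for illustration, but the structure matches the real response:

# Illustrative response (all values are fake):
SAMPLE='{"data":{"ticket":"PVE:root@pam:4EEC61E2::dGhpcyBpcyBmYWtl","CSRFPreventionToken":"4EEC61E2:lwk7od06fake","username":"root@pam"}}'
decodeDataFromJson "$SAMPLE" 'ticket'                # prints PVE:root@pam:4EEC61E2::dGhpcyBpcyBmYWtl
decodeDataFromJson "$SAMPLE" 'CSRFPreventionToken'   # prints 4EEC61E2:lwk7od06fake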

The next step is to call the Proxmox API using curl to obtain our authentication token. Use the below script and substitute the values as required:

  • PROX_USERNAME is the username and security realm used to log into the Proxmox Web front end. This must be a valid user with the required permission to make the calls you need.
  • PROX_PASSWORD is the password for the above user. You must escape any special characters as usual in Bash.
  • HOST is the host or IP address of the Proxmox server.

PROX_USERNAME=root@pam
PROX_PASSWORD=PASSWORD
HOST=proxmox-host

DATA=$(curl -s -k -d "username=$PROX_USERNAME&password=$PROX_PASSWORD" https://$HOST:8006/api2/json/access/ticket)
TICKET=$(decodeDataFromJson "$DATA" 'ticket')
CSRF=$(decodeDataFromJson "$DATA" 'CSRFPreventionToken')

And that’s all there is to it! You can use the variables $TICKET and $CSRF in later requests. Keep in mind that a ticket is only valid for 2 hours; after that you’ll need to create a new one.
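
As a rough sketch of how the two values are used in follow-up calls: the ticket is passed as the PVEAuthCookie cookie, and write operations (POST/PUT/DELETE) also need the CSRFPreventionToken header. The node name and VM ID below are made up for illustration:

# GET requests only need the ticket cookie, e.g. list cluster nodes:
curl -s -k -b "PVEAuthCookie=$TICKET" https://$HOST:8006/api2/json/nodes

# Write operations also need the CSRF token header,
# e.g. start VM 100 on node 'proxmox-host' (both hypothetical):
curl -s -k -b "PVEAuthCookie=$TICKET" \
     -H "CSRFPreventionToken: $CSRF" \
     -X POST https://$HOST:8006/api2/json/nodes/proxmox-host/qemu/100/status/start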


Proxmox 5.0 Beta 2 is Now Available

Category : Tech News

The Proxmox VE team have released a new BETA version of their 5.0 branch to the public.

Proxmox 5.0 beta 1 was released recently, and the team have now released the second beta, fixing numerous bugs.

  • Based on Debian Stretch
  • Kernel 4.10
  • Updated QEMU 2.9 with support for nested virtualization
  • New Ceph Luminous (12), the next LTS release

You can download the ISO from their download site and install it onto a new machine. You can also upgrade an existing Proxmox 4.x release to the new 5.0 release following these instructions.

It’s worth noting that the Proxmox team state that any BETA version of release 5.0 will be upgradeable to the full 5.0 when it’s available through apt.

As this is a BETA version, it’s not recommended for production environments and should only be used for testing.


Proxmox 4.2 is Now Available

Category : Tech News

A new version of Proxmox has been released, building on the solid foundations laid down with the version 4 branch of Proxmox VE. The latest version 4.2 brings a new GUI, integration with Let’s Encrypt, updated packages and countless bug fixes.

You can download the ISO from https://www.proxmox.com/en/downloads/category/iso-images-pve

Highlights of the 4.2 release:

  • Based on Debian 8.4
  • New GUI using Sencha Ext JS 6
  • Let’s Encrypt support
  • Numerous LXC updates
    • Ability to add network rate limits
    • Add mount points via the GUI
    • Improved backups

Upgrade

Before updating, make sure all your VMs have been stopped, both LXC and KVM. Ensure you have the required repository entries for apt-get; you’ll either need a valid subscription key or to add the less stable pve-no-subscription repository. See Proxmox 3.1 package/updates manager (this also works for version 4.x) for more information.

Run the below commands on each server in your cluster.

apt-get update
apt-get dist-upgrade

Restart all Proxmox servers to complete the installation.


Proxmox 4.x bind mount – mount storage in an LXC container

An LXC container’s storage is simple to set up and maintain, and is usually done through either a web based GUI or a command line utility. It’s simple to set the size of disk allocated to an LXC container, and you can increase it easily, even while the container is still running.

Whilst simple to set up and administer, the standard storage options of LXC containers are limited. For example, you can’t mount an NFS share in an LXC container, nor can you have multiple disks mounted as /dev block devices.

That’s where a bind mount comes in. You can add one or more mount points to your LXC container config, each specifying a source path and a target path, which are activated when the container starts. The source path is a location on the host machine (the physical host running the LXC container – the Proxmox host in this example). The target is a location inside the LXC container, such as /mnt/myshare. This means that you can mount an NFS share, a GlusterFS share, several physical disks or anything else that can be mounted on your host and pass it through to your container.

Before you start, you’ll need to make sure both the host location and the target container location exist, otherwise the container will fail to start. You’ll then need to edit your LXC container config file; on Proxmox 4.x this can be found in /etc/pve/lxc/, in a file named after the ID of your container. In this example the container we’re working on has an ID of 101.
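
First, one way to create the two directories, assuming the container is currently running (the paths and container ID are the examples used throughout this post):

# On the Proxmox host:
mkdir -p /mnt/pve/nfs-share

# Inside the container, via pct exec:
pct exec 101 -- mkdir -p /mnt/nfs

With both paths in place, open the config file: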

vi /etc/pve/lxc/101.conf

Add the following row, substituting SOURCE with the path that you’d like to pass through to your container and TARGET with the path inside the container.

mp0: SOURCE,mp=TARGET

The below example will make /mnt/pve/nfs-share available in the container at /mnt/nfs.

mp0: /mnt/pve/nfs-share,mp=/mnt/nfs

Then restart your CT for the changes to take effect.


If you have multiple paths to mount then you can increment mp0 to mp1, mp2, etc.

mp0: /mnt/pve/nfs-share,mp=/mnt/nfs
mp1: /mnt/pve/gluster-share,mp=/mnt/gluster
...
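
As an alternative to editing the config file by hand, the pct tool on recent Proxmox 4.x releases can add a mount point for you; a minimal sketch using the same example paths and container ID:

pct set 101 -mp0 /mnt/pve/nfs-share,mp=/mnt/nfs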

If you’re using version 3.x of Proxmox, or stand alone OpenVZ then see Proxmox bind mount for OpenVZ.


qcow2 Disk Images and Performance

qcow2 is a virtual disk image format developed by the guys who created QEMU and is one of the most versatile virtual disk formats available. It’s the default and preferred virtual disk format for the Proxmox VE hypervisor and should be used for most virtual machines.

qcow2 offers the following features:

  • Sparse space allocation which means that the entire virtual disk size doesn’t need to be allocated on the hard drive when it’s created. Only the physical space needed by actual data stored to the virtual disk is required.
  • Snapshots can be stored and rolled back to thanks to the copy-on-write process which is used to write to qcow2 files.
  • Linked or chained files can be used. For example, a read only base file could be used to hold ‘system’ files (a gold plate image, if you will), and any changes could be written to an additional file leaving the original intact and unchanged. Multiple machines could use this base file at once, therefore reducing space requirements.
  • AES encryption can be used to encrypt all data at rest.
  • Compression, based on zlib, to reduce physical space requirements and the number of bytes read.

Because of all these features, qcow2 files carry a processing overhead compared to raw files: any data read from or written to a qcow2 virtual disk has to go through a process that can slow the read or write operation. This IO overhead, relative to raw type storage, is something we have to consider when deciding which features to use.

Increase qcow2 Performance

Sparse Space Allocation

Anything stored on a virtual disk has to be, at some point, stored on a physical medium such as a hard disk. In addition to the data, a virtual disk holds a small amount of metadata that is usually stored in the same file. For example, unlike a physical hard disk, a virtual disk has no physical constraint on how large it can be, so its size is one of the bits of data we need to store in the qcow2 file.

In addition to that, and just like a physical hard drive, data in a qcow2 file is stored in blocks or clusters and a lookup is required to determine what data is in which cluster. Think of this as a shelf full of numbered boxes, and having a book (or index) which tells you what each box number contains. All of this cluster information is also stored within the qcow2 file consuming disk space that is relative to the data capacity of the qcow2 file. For example, a qcow2 file that can store 1GB of data would have a much smaller metadata footprint than a qcow2 file that can store 100GB of data.
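
The cluster size is tunable when the image is created, which is one way to trade metadata overhead against allocation granularity. A minimal sketch using qemu-img (the 64k value is just an example, and happens to be the default):

qemu-img create -f qcow2 -o cluster_size=64k image.qcow2 10G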

Anyway, back to sparse files. The idea of a sparse file is to remove the need to allocate the full size of the file on a physical disk. I can, for example, create a qcow2 image with a data capacity of 10GB that will take up just a few KBs of physical space until data is saved to the qcow2 image. As data is saved to the qcow2 image, the physical space used by the image will increase (the data has to be stored somewhere, right?). In addition, so will the metadata, because each new cluster that’s required by the qcow2 file will have its own entry in the metadata section of the file.

qemu-img comes with various options for setting the allocation when creating new disk images.

  • preallocation=metadata – allocates the space required by the metadata but doesn’t allocate any space for the data. This is the quickest to provision but the slowest for guest writes.
  • preallocation=falloc – allocates space for the metadata and data but marks the blocks as unallocated. This will provision slower than metadata but quicker than full. Guest write performance will be much quicker than metadata and similar to full.
  • preallocation=full – allocates space for the metadata and data and will therefore consume all the physical space that you allocate (not sparse). All empty allocated space will be set to zero. This is the slowest to provision and will give similar guest write performance to falloc.

Example command:

qemu-img create -f qcow2 -o preallocation=falloc image.qcow2 1G
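
To see sparse allocation in action you can compare the image’s virtual size with the physical space it actually consumes; a quick check on the image created above:

qemu-img info image.qcow2    # reports both the virtual size and the 'disk size' on the host
du -h image.qcow2            # physical space actually allocated on disk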

The performance impact here comes when the virtual image needs to grow in order to store new information written to it. For each new write, a new cluster may need to be provisioned along with a metadata index entry referencing it. Depending on the option selected above, the OS may have to allocate a new sector for both the index and the data cluster, incurring a performance penalty. Once the disk has been fully expanded (e.g. with preallocation=full) there is no penalty on assigning a new cluster as all the clusters are already assigned and available.

See qcow2 preallocation for some examples and benchmarks of the above attributes.

Encryption

qcow2 images are not encrypted by default, so not using encryption couldn’t be simpler. Of course, your data will not be encrypted (unless you use some other process on top of the virtual storage layer), but you’ll save all those CPU cycles when reading and writing the data.

Compression

qcow2 is, at best, a bit weird when it comes to compression (encryption works the same way, too!) in that compression is a one-time event: a process that you run against an existing image. Any data written after this will be stored uncompressed.
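
Compressing an existing image is done by converting it with qemu-img and the -c flag; a minimal sketch, with the file names as examples:

qemu-img convert -O qcow2 -c image.qcow2 image-compressed.qcow2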

The next thing is to understand compression itself – compression (under the right circumstances) will reduce the size of the data stored on disk at the expense of CPU to compress (a one-off cost) and decompress (every time the data is accessed) the data. In certain circumstances, compression can result in a quicker read for the process consuming the data, such as where CPU is abundant and IO bandwidth is very small.

As always, testing your scenarios is the best way to understand the impact.


Proxmox 4.0 Has Been Released

Category : Tech News

The Proxmox VE team have released the latest version of Proxmox VE, Proxmox VE 4.0.

There are some pretty big changes in Proxmox version 4 which really warrant the new 4.0 version increment. See my other blog post on the initial 4.0 BETA 1 release notes for the bigger picture, or the below bullet points for the latest changes.

  • Improvements to the new container engine based on LXC, especially the integration into the storage model.
  • Migration path from OpenVZ to LXC
  • Linux Kernel 4.2
  • Ceph Server packages (0.94.x – hammer release)
  • Embedded NoVNC console
  • Improved IPv6 support
  • Redesigned HA
  • Countless bug fixes

You can download the ISO from their download site and install it onto a new machine.
