Share GlusterFS volume to a single IP address

Category : How-to


When you create a new GlusterFS volume it is publicly available for any server on the network to read.

File servers do not generally have firewalls as they are hosted in a secure zone of a private network. Just because the network is secure doesn't mean you should leave your volumes wide open to anyone with access to connect.

Using the auth.allow and auth.reject arguments in GlusterFS we can choose which IP addresses can access the volume. Access is controlled at the volume level, so you will need to alter access permissions on every new volume you create.

Run the below command, changing [VOLUME] to match the volume to be protected and [IP ADDRESS] to the IP address of the server that is allowed to connect. Volume options apply cluster-wide, so the command only needs to be run on one node.

gluster volume set [VOLUME] auth.allow [IP ADDRESS]

[IP ADDRESS] does not have to be a single IP address. You can also use an asterisk [*] as a wildcard, or multiple addresses separated by a comma [,]. The below example allows only servers with an IP address in the 10.1.1.x range, plus 10.5.5.1, to access the datastore volume. All other servers will be denied access to the volume.

gluster volume set datastore auth.allow 10.1.1.*,10.5.5.1
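The auth.reject option works the same way in reverse: matching addresses are denied and all others allowed. A quick sketch, using a hypothetical address to block:

gluster volume set datastore auth.reject 10.1.1.99

To confirm an option has been applied, run gluster volume info [VOLUME] and check the options listed at the end of the output.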

Set up GlusterFS with a replicated volume over 2 nodes

Category : How-to


This post will show you how to install GlusterFS in Ubuntu/ Debian, however the steps will be similar on Red Hat based Linux operating systems with minor changes to the commands.

Gluster File System is a distributed file system allowing you to create a single volume of storage which spans multiple disks, multiple machines and even multiple data centres.

Before we get started, install the required packages using apt-get. With Red Hat/ CentOS based operating systems you will need to use yum, or download the package directly from http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/

apt-get install glusterfs-server
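For Red Hat/ CentOS based systems the equivalent would be the following, assuming the glusterfs-server package is available in your configured repositories:

yum install glusterfs-server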

Perform this on both of your servers. If you have more than two servers, perform this command on all of the servers required for the volume.

You will now need each of these servers to know about the others. Run gluster peer probe with the hostname or IP address of each of the other servers in your GlusterFS cluster.

gluster peer probe gfs2.jamescoyle.net
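If your cluster has more than two nodes, probe each of the remaining servers in the same way (gfs3.jamescoyle.net below is a hypothetical third node):

gluster peer probe gfs3.jamescoyle.net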

Each of the commands should return Probe successful, which means the node is now known to this machine. You will only need to do this on one node of your cluster.

Run gluster peer status to check each node in your cluster is aware of the other nodes:

gluster peer status

The result should look like:

Number of Peers: 1

Hostname: gfs2.jamescoyle.net
Uuid: a0977ca2-6e47-4c1a-822b-99df896080ee
State: Peer in Cluster (Connected)

Now we need to create the volume where the data will reside. The volume will be called datastore. First of all, we need to identify where on the host this storage is. For this example, it is /mnt/gfs_block on both nodes, but this could be any mount point of storage that you have. If the folder does not exist, it will be silently created, so be sure to get the correct path on all nodes.

gluster volume create datastore replica 2 transport tcp gfs1.jamescoyle.net:/mnt/gfs_block gfs2.jamescoyle.net:/mnt/gfs_block

If this has been successful, you should see:

Creation of volume datastore has been successful. Please start the volume to access data.

As the message indicates, we now need to start the volume:

gluster volume start datastore

And wait for the message that it has started:

Starting volume datastore has been successful

Running either of the below commands should indicate that GlusterFS is up and running. The ps command should show the command running with both servers in the argument. netstat should show a connection between both nodes.

ps aux | grep gluster
netstat -tap | grep glusterfsd

As a final test, to make sure the volume is available, run gluster volume info. An example output is below:

gluster volume info

Volume Name: datastore
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gfs1.jamescoyle.net:/mnt/gfs_block
Brick2: gfs2.jamescoyle.net:/mnt/gfs_block

That’s it! You now have a GlusterFS volume which will maintain replication across two nodes. To see how to use your volume, see our guide to mounting a volume.


Mount a GlusterFS volume


GlusterFS is an open source distributed file system which provides easy replication over multiple storage nodes. These nodes are then combined into storage volumes which you can easily mount using fstab in Ubuntu/ Debian and Red Hat/ CentOS. To see how to set up a GlusterFS volume, see this blog post.

Before we can mount the volume, we need to install the GlusterFS client. In Ubuntu we can simply apt-get the required package, or yum in Red Hat/ CentOS. For Ubuntu/ Debian:

apt-get install glusterfs-client

For Red Hat, OEL and CentOS:

yum install glusterfs-client

Once the install is complete, open the fstab and add a new line pointing to your server. The server used here is the server which contains the information on where to get the volume, and not necessarily where the data is. The client will connect to the server holding the data. The following steps are the same on both Debian and Red Hat based Linux distributions.

Easy way to mount

vi /etc/fstab

Replace [HOST] with your GlusterFS server, [VOLNAME] with the GlusterFS volume to mount and [MOUNT] with the location to mount the storage to.

[HOST]:/[VOLNAME] [MOUNT] glusterfs defaults,_netdev 0 0

Example:

gfs1.jamescoyle.net:/datastore /mnt/datastore glusterfs defaults,_netdev 0 0

Finally, reboot your machine to make the volume appear in df.

df -h
gfs1.jamescoyle.net:/datastore   30G  1.2G   27G   5% /mnt/datastore
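You don't strictly have to reboot; mount can read the new entry straight from fstab, assuming the mount point already exists (create it with mkdir -p if not):

mkdir -p /mnt/datastore
mount /mnt/datastore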

More redundant mount

The trouble with the above method is that there is a single point of failure. The client only has one GlusterFS server to connect to. To set up a more advanced mount, we have two options: create a volume config file, or use backupvolfile-server in the fstab mount. Remember this is not to specify where all the distributed volumes are, it's to specify a server to query for all the volume bricks.

fstab method

We can use the parameter backupvolfile-server to point to our secondary server. The below example indicates how this could be used.

gfs1.jamescoyle.net:/datastore /mnt/datastore glusterfs defaults,_netdev,backupvolfile-server=gfs2.jamescoyle.net 0 0

Using a volume config file

Create a volume config file for your GlusterFS client.

vi /etc/glusterfs/datastore.vol

Add the following content to the file, replacing [HOST1] with your first GlusterFS server, [HOST2] with your second GlusterFS server and [VOLNAME] with the GlusterFS volume to mount.

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host [HOST1]
  option remote-subvolume [VOLNAME]
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host [HOST2]
  option remote-subvolume [VOLNAME]
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Example:

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host gfs1.jamescoyle.net
  option remote-subvolume /mnt/datastore
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host gfs2.jamescoyle.net
  option remote-subvolume /mnt/datastore
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Finally, edit fstab to add this config file and its mount point. Replace [MOUNT] with the location to mount the storage to.

/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
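As with the simple fstab entry, you can test the mount without rebooting by asking mount to look up the path you used for [MOUNT], here using /mnt/datastore from the earlier example:

mount /mnt/datastore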

Proxmox 3.1 package/ updates manager


Proxmox version 3.1 made a huge change to the way updates are managed. Prior to version 3.1, updates for the PVE software were managed from two main repositories – test and production – both of which were free and available. Generally the production repositories were used to ensure a stable and secure Proxmox host. These two repositories have now changed to the following:

Please note, this also works for version 4.x.

No-subscription repository

This repository remains free, and all the code is still under AGPL v3. I believe this repository is similar to what the test repository used to be – containing test and potentially buggy packages. There is also the standard Debian repository for all other, non-Proxmox, updates. As these packages are potentially unstable, it is not recommended to use this repository for a production server.

Proxmox Version 3.x

Proxmox version 3.x is based on Debian Wheezy.

deb http://ftp.uk.debian.org/debian wheezy main contrib
deb http://download.proxmox.com/debian wheezy pve-no-subscription

Proxmox Version 4.x

If you run version 4.x of Proxmox, based on Debian Jessie, then you'll need to adjust the URLs slightly as below:

deb http://ftp.uk.debian.org/debian jessie main contrib
deb http://download.proxmox.com/debian jessie pve-no-subscription

Enterprise repository

This repository requires a subscription license key to be added to Proxmox before you can access it. These packages are stable and are recommended for any production Proxmox server after version 3.1.

If you do not have a valid subscription, you will need to remove this repository otherwise apt-get will never complete its update. Without a valid subscription, you will see errors such as "You do not have a valid subscription for this server. Please visit www.proxmox.com to get a list of available options."


To remove the repository, open the source list file in the apt-get sources folder

vi /etc/apt/sources.list.d/pve-enterprise.list

And comment out the line with a # as below.

#deb https://enterprise.proxmox.com/debian wheezy pve-enterprise

All updates will now come from the no-subscription repository, which is publicly available. Unfortunately, each time you click the Refresh button on the Updates tab you will receive the above message. You will also receive this message when you login to the web GUI.

What about the old updates repository?

The original updates repository should be removed once you have upgraded to Proxmox 3.1. No future updates will be available in this repository.

Edit the sources list and comment out the old repository:

vi /etc/apt/sources.list

If you used the test repository, you will need to comment that out too.

# PVE packages provided by proxmox.com
#deb http://download.proxmox.com/debian wheezy pve
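Once you have finished editing the repository files, refresh the package lists so apt picks up the changes:

apt-get update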

How to remove the “No Valid Subscription” message

Each time you log into Proxmox 3.1 a dialogue box pops up reminding you that you have not purchased a valid subscription. One way to remove the message is to purchase a subscription from the Proxmox team. Another is to make a slight change to the code to stop the dialogue box from appearing.

See my post on how to change the code to remove the dialogue box.


Proxmox 3.1 is now available!


A new release of Proxmox is now available, release 3.1.

The highlights of this new release are:

  • A new storage plugin for GlusterFS.

This is a new storage plugin which can be used to add usable storage to your Proxmox host. GlusterFS is an open source, distributed file system with potential to house a huge capacity of data.

GlusterFS is key to open source, scalable and highly available storage spanning many servers, and even data centres.

See their About page for more information: http://www.gluster.org/about/

  • The SPICE protocol has been implemented as a technology preview. Whilst it is not recommended for production systems, you can see what's in store in the Proxmox roadmap.
  • One of the most dramatic changes of Proxmox 3.1 is that the package repositories have been split into subscription and no-subscription. The subscription repository is the only repository recommended for a production server, however it requires a paid subscription with Proxmox to access it. See this thread for more information on the new Updates GUI page.


You can update any 3.0 install of Proxmox to the latest 3.1. Before updating, make sure all your VMs have been stopped. Run the below commands on each server in your cluster.

apt-get update
apt-get dist-upgrade

Restart all Proxmox servers to complete the update.
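Once the servers are back up you can confirm the installed version with Proxmox's pveversion tool:

pveversion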



Create a permanent virtual IP address in Linux

Category : How-to


In a previous post, we saw how to create a virtual IP address, based on an existing network interface. The trouble with this method is that the virtual IP address will vanish when you reboot your machine.

One option is to attach the script to the network up and down scripts, however there is a much easier way!

In Debian/ Ubuntu you simply create a new, virtual interface in the interfaces file.

vi /etc/network/interfaces

And add a network interface, based on an existing interface. The below example is a virtual IP based on eth0 – note if this is your second virtual IP you would use eth0:2, and so on.

auto eth0:1
iface eth0:1 inet static
address 192.168.100.9
netmask 255.255.255.0
network 192.168.100.0
broadcast 192.168.100.255

You will need to change the IP addresses to match your network. Notice there is no gateway – usually you can only have one gateway per machine.

Restart networking for the changes to take effect.
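You can bring just the new virtual interface up without restarting the whole network stack; ifup reads the interfaces file, so eth0:1 below matches the example above:

ifup eth0:1

Check the address has been applied with ifconfig eth0:1 or ip addr show eth0.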

