rclone Systemd startup mount script

Rclone is a command line utility used for reading and writing to almost any type of cloud or remote storage. From Google Drive to Ceph, rclone supports almost any cloud-based remote storage platform you can think of. You can perform upload, download or synchronisation operations between local storage and remote cloud storage, or between remote storage directly.

In addition to this, rclone has an experimental mount feature that lets a user mount a remote cloud storage provider, such as s3 or Google Drive, as a local filesystem. You can then use the mounted filesystem as if it were a local device, albeit with some performance considerations.

Before we get going, make sure you have rclone installed on your system and configured with a remote. 

curl https://rclone.org/install.sh | sudo bash
rclone config 
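
If you want to confirm the remote is configured before going any further, the below will list your configured remotes and the top level directories of that remote (this assumes you’ve named the remote gdrive, as used for the rest of this post).

rclone listremotes
rclone lsd gdrive: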

Once you have a remote defined, it’s time to create the mountpoint and systemd script. I’ll be using Google Drive for this example, but the mount command works for any supported remote.

Create the mount point directory to use for the remote storage:

mkdir /mnt/google-drive

Next, create the below systemd script and edit it as required:

vi /etc/systemd/system/rclone.service
# /etc/systemd/system/rclone.service
[Unit]
Description=Google Drive (rclone)
AssertPathIsDirectory=/mnt/google-drive
# wait for the network to be up before mounting the remote
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount \
        --config=/root/.config/rclone/rclone.conf \
        --allow-other \
        --cache-tmp-upload-path=/tmp/rclone/upload \
        --cache-chunk-path=/tmp/rclone/chunks \
        --cache-workers=8 \
        --cache-writes \
        --cache-dir=/tmp/rclone/vfs \
        --cache-db-path=/tmp/rclone/db \
        --no-modtime \
        --drive-use-trash \
        --stats=0 \
        --checkers=16 \
        --bwlimit=40M \
        --dir-cache-time=60m \
        --cache-info-age=60m gdrive:/ /mnt/google-drive
ExecStop=/bin/fusermount -u /mnt/google-drive
Restart=always
RestartSec=10

[Install]
WantedBy=default.target

The important parts are detailed below, however there are various other options detailed on the rclone mount documentation page.

  • --config – the path to the config file created by rclone config. This is usually located in the user’s home directory – see the example below.
  • gdrive:/ /mnt/google-drive – details two things; firstly the remote name created in rclone config (followed by the path within the remote), and secondly the mount point on the local filesystem to use.
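
For reference, a Google Drive remote named gdrive in the rclone config file looks something like the below. The exact keys depend on the storage provider and how you authorised rclone, so treat this as an illustration only.

[gdrive]
type = drive
scope = drive
token = {"access_token":"...","token_type":"Bearer","refresh_token":"...","expiry":"..."}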

Once all this is in place you’ll need to reload systemd so that it picks up the new unit file, start the service and enable it at system startup (if required):

systemctl daemon-reload
systemctl start rclone
systemctl enable rclone
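
Once the service has started you can confirm the remote is mounted by checking the service status and the mount point itself.

systemctl status rclone
findmnt /mnt/google-drive
ls /mnt/google-drive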

GlusterFS Error cannot open /dev/fuse


After installing glusterfs-client on my Debian server I received the below error when trying to mount a remote GlusterFS volume. The error indicates that the device at /dev/fuse cannot be found, however ls showed that it was available.

This was the error displayed in the Gluster log after running the mount command:

[2016-04-12 17:39:58.948364] I [MSGID: 100030] [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.10 (args: /usr/sbin/glusterfs --volfile-server=glustercluster1 --volfile-id=/data-volume /mnt/data-volume)
[2016-04-12 17:39:59.030349] E [mount.c:341:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such device)
[2016-04-12 17:39:59.030385] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
[2016-04-12 17:43:29.644266] I [MSGID: 100030] [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.10 (args: /usr/sbin/glusterfs --volfile-server=glustercluster1 --volfile-id=/data-volume /mnt/data-volume)
[2016-04-12 17:43:29.661947] E [mount.c:341:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such device)
[2016-04-12 17:43:29.662014] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again

A quick check of the kernel fuse module using modprobe gave an error:

modprobe fuse
ERROR: could not insert 'fuse': Unknown symbol in module, or unknown parameter (see dmesg)

And some Googling indicated that it’s because fuse-utils was missing. In my case it wasn’t.

apt-get install fuse-utils

Further investigation showed that the kernel had recently been updated, but the machine hadn’t been restarted so the latest installed kernel wasn’t the kernel that was running. There seemed to be some kind of mismatch between the loaded kernel and the fuse library.
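
If you want to confirm a similar mismatch on your own system, comparing the running kernel version with the kernels installed on disk is a quick check – if the running kernel no longer has a matching module directory under /lib/modules, the fuse module can’t be loaded until you reboot into the new kernel.

uname -r
ls /lib/modules/
dpkg -l | grep linux-image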

A reboot of the machine fixed the issue – the fuse module loaded correctly and the Gluster mount executed without error.

reboot

GlusterFS Mount failed. Please check the log file for more details.


You may get the following error when trying to mount a GlusterFS volume locally. The error displayed gives no indication why the volume failed to mount, but it does hint at where you can get more information about the error.

This is the error presented when running the mount command:

Mount failed. Please check the log file for more details.

The log file could be in numerous places, depending on your Linux distribution and Gluster settings, however generally it will be in /var/log/glusterfs.
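
GlusterFS names the mount log after the mount point, with the slashes replaced by hyphens (so a mount at /mnt/data-volume logs to mnt-data-volume.log). If you’re not sure which file to read, listing the directory by modification time will show the most recently written log.

ls -lt /var/log/glusterfs/ | head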

Take a look at the log file for further information on why the volume cannot be mounted. An example is included below, showing an issue with the fuse kernel module.

vi /var/log/glusterfs/mnt-data-volume.log
[2016-04-12 17:39:58.948364] I [MSGID: 100030] [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.10 (args: /usr/sbin/glusterfs --volfile-server=glustercluster1 --volfile-id=/data-volume /mnt/data-volume)
[2016-04-12 17:39:59.030349] E [mount.c:341:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such device)
[2016-04-12 17:39:59.030385] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
[2016-04-12 17:43:29.644266] I [MSGID: 100030] [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.10 (args: /usr/sbin/glusterfs --volfile-server=glustercluster1 --volfile-id=/data-volume /mnt/data-volume)
[2016-04-12 17:43:29.661947] E [mount.c:341:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such device)
[2016-04-12 17:43:29.662014] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again

Your issue could vary, and as such we can’t cover every eventuality here. At least you now know how to get more details around your specific issue.


Persistent Ceph Mount Point


Once you’ve got a Ceph cluster up and running you’re going to want to mount it somewhere. This guide assumes that the mount point will be on a machine that isn’t running Ceph, however if you’re mounting the storage on one of the Ceph server nodes then you can skip the package installation steps.

Install the Ceph Client

Before we start mounting anything, we’re going to need the required software installed. Assuming you’re on Debian run the below commands to add the key and the software repository for the Ceph binaries.

wget --no-check-certificate -q -O- 'https://git.ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list

Then run the apt-get commands to update your software index and install the Ceph binaries for the client.

apt-get update && apt-get install -y ceph-fs-common

Mount a Ceph device as a folder

Here we’re going to use /mnt/ha-pool as the mount point but you can change that to whatever you’d like. Run this command on any machine that you’d like to mount the Ceph volume on.

mkdir /mnt/ha-pool

Then we need to export the key so that the ceph-client can authenticate with the Ceph daemon. You could turn authentication off, or even create a non-admin user secret but for this tutorial we’ll just use the admin user. Run this command on your admin machine for your Ceph cluster (NOT on the client you’re setting up the mount point).

ceph-authtool --name client.admin /etc/ceph/ceph.client.admin.keyring --print-key

You’ll be presented with a string of letters and numbers. Copy this and add it to a file stored on your Ceph client machine. This is the ‘password’ or secret that the Ceph client will use to authenticate with the Ceph server. Paste the string into a file – you can store this anywhere but we’ll use /etc/ceph/admin.secret.

mkdir /etc/ceph/ 
vi /etc/ceph/admin.secret
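
If the client can SSH to your Ceph admin machine you can skip the copy and paste and write the key straight into the secret file. The below assumes the admin machine is reachable as ceph-admin, so change the hostname (and user) for your environment.

ssh root@ceph-admin "ceph-authtool --name client.admin /etc/ceph/ceph.client.admin.keyring --print-key" > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret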

Automatic mount

If you’d like the Ceph mount point to persist across client machine reboots then you’ll need to add an entry to /etc/fstab. Run the below command to add an entry to your fstab file so that the Ceph volume will be automatically mounted on machine start. This will mount the Ceph volume at /mnt/ha-pool and is referencing the Ceph monitor server nodes ceph1, ceph2 and ceph3 – make sure you change these values for your environment. You don’t have to specify more than one Ceph monitor server node, but it makes sense, just in case one of your nodes fails.

echo "cehp1,ceph2,ceph3:/ /mnt/ha-pool/ ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 2" >> /etc/fstab

Then to mount the volume, run the below mount command

mount /mnt/ha-pool

Manually mount filesystem

If you don’t need the mount to persist you can simply use the mount command. The parameters are very similar to the above section, with the Ceph monitor servers, secret file and mount point all specified. This will mount the Ceph volume at /mnt/ha-pool and is referencing the Ceph monitor server nodes ceph1, ceph2 and ceph3 – make sure you change these values for your environment.

mount -t ceph ceph1,ceph2,ceph3:/ /mnt/ha-pool -o name=admin,secretfile=/etc/ceph/admin.secret

Ceph mount ports and additional options

By default, and if left unspecified as in the above examples, the Ceph client will use port 6789 to connect to your monitor daemons. If you’ve specified a different port for your monitor daemons then you can include the ports in the mount command. The same syntax can be used in your fstab.

mount -t ceph ceph1:1234,ceph2:4567,ceph3:8910:/ /mnt/ha-pool -o name=admin,secretfile=/etc/ceph/admin.secret

You can also specify your secret key directly, rather than a file that contains it. I won’t go into the security implications of this here, but I’m sure you can imagine one or two. Again, the same syntax can be used in your fstab.

mount -t ceph ceph1,ceph2,ceph3:/ /mnt/ha-pool -o name=admin,secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==

Small Scale Ceph Replicated Storage


I’ve written a few posts about Ceph, how it works and how it’s set up, and they mostly revolve around large scale storage for storing things like virtual machines. This post will focus on using Ceph to provide fault tolerant storage for a small amount of data in a low resource environment. Because of this, the main focus has been moved away from performance and switched to:

  • availability – the storage should always be available and recoverable in the event of disaster
  • portability – the storage isn’t tied to a machine and can be moved with relative ease.
  • scalability – more machines can use the storage as required.

This tutorial will focus on a small scale Ceph setup, fit for something like a Raspberry Pi or low resource VPS. We’ll use 3 machines but you could easily add more machines if your scenario requires it.

If you are looking for a larger setup, then see this blog post on installing Ceph.

[Diagram: each of the three servers holds a /ceph-file mounted as the block device /dev/loop0, which is given to Ceph; the replicated storage is mounted on clients at /mnt/ha-pool]

The above diagram shows the topology of the layout. Each machine will have a file /ceph-file that will be mounted as a block device on /dev/loop0 and that’s the space that will be assigned to Ceph. Ceph will replicate any data stored to the file and ensure the data is available to all Ceph clients. The Ceph storage will be accessed from a mountpoint at /mnt/ha-pool.

Ceph block device

The first step in creating a Ceph storage pool is to set aside some storage that can be used by Ceph. Ceph stores everything twice, by default, so whatever storage you provision will be halved. For this example we’re going to use a file created with dd as the Ceph storage device, however you could use a drive mounted in /dev/ if you have one. A whole drive is by far the preferred solution, however as I’ve stated, the main goal of this post isn’t just performance.

If you’re going to use a file for storage, follow my post on creating a block device from a file and mount it on loop0. Otherwise you can continue to the next step.

OpenVZ: if you’re using Ceph inside of an OpenVZ container, make sure you pass the loop device through to the container.

Installing Ceph

At this point it’s worth noting that Ceph, in addition to the application requirements, will use approximately 1MB of RAM for each GB of storage provisioned. This means that 1TB of provisioned storage (which in today’s world is rather small) would take 1GB of RAM plus the requirements of running the Ceph daemons. For our low memory footprint, only provision the storage that you’ll need.

Before starting the install, you’ll need a couple of things in place:

  • SSH Keys are set up between all nodes in your cluster – see this post for information on how to set up SSH Keys. For security it’s good practice to set up a new user on all machines you’re going to install Ceph onto and use it to run Ceph. The key should also be copied to all machines using the ssh-copy-id command – see the sketch after this list.
  • NTP is set up on all nodes in your cluster to keep the time in sync. You can install it with: apt-get install ntp
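
As a rough sketch of the key setup, run something like the below from the machine you’ll be driving the installation from. This assumes you’ve created a ceph-admin user on each node – adjust the user and hostnames for your environment.

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id ceph-admin@ceph1
ssh-copy-id ceph-admin@ceph2
ssh-copy-id ceph-admin@ceph3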

The following commands are for installing Ceph on Debian (wheezy) and should be executed on all machines that need to run Ceph. In our example, these commands will be executed on Server 1, Server 2 and Server 3.

First let’s add the release key and repositories to the apt package manager. Run the following as root:

wget --no-check-certificate -q -O- 'https://git.ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list

Next let’s update our apt cache and install Ceph and a few other bits.

apt-get update && apt-get install ceph-deploy ceph ceph-common

Setup and configuring for minimal resource requirements

The next step should be done on just one of your Ceph machines. This will create the monitor service and make each machine aware of the other machines running Ceph.

The command references each machine you’re going to be running Ceph on by hostname or DNS entry. Before running the command, make sure that all of your machines resolve via DNS or hosts file. Because I’m only running this in a lab, I’ve used the hosts file route and added an entry to each machine in the hosts file of all Ceph machines.

vi /etc/hosts

Add your Ceph machine IP and hostnames.

10.10.10.1 ceph1
10.10.10.2 ceph2
10.10.10.3 ceph3

You can test that each machine can see the others by using the ping command. If it works then you should be in business!

ping ceph2
ping ceph3

Once you’re happy that all machines can reference the other machines then run the ceph-deploy command:

ceph-deploy new ceph1 ceph2 ceph3

If you haven’t used your ssh keys since setting them up you may be presented with the following warning. Just type yes to continue.

The authenticity of host 'ceph1 (10.10.10.1)' can't be established.
ECDSA key fingerprint is 66:44:a8:90:e2:8e:12:0e:05:4a:c4:93:a1:43:d1:fd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph1' (ECDSA) to the list of known hosts.

We now need to configure Ceph with our low resource settings. These settings are not performance driven, but instead set to minimise system resources.

See ceph.conf for the settings and add the content to the ceph.conf file:

vi ~/ceph.conf
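
I won’t reproduce the full file here, but as an illustration of the kind of low resource settings involved, the [global] section of a small three node cluster might include values along these lines – treat the numbers as a starting point rather than a recommendation.

[global]
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 64
osd pool default pgp num = 64
osd crush chooseleaf type = 1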

Create the initial monitor daemons, distribute the admin credentials, create the mds daemons and set the proper permissions on the keyring file.

ceph-deploy mon create-initial
ceph-deploy admin ceph1 ceph2 ceph3
ceph-deploy mds create ceph1 ceph2 ceph3

ssh ceph1 "chmod 644 /etc/ceph/ceph.client.admin.keyring"
ssh ceph2 "chmod 644 /etc/ceph/ceph.client.admin.keyring"
ssh ceph3 "chmod 644 /etc/ceph/ceph.client.admin.keyring"

Test Ceph is deployed and monitors are running

At this point it’s good to take a step back and check everything is up and running. We’ve still not assigned any storage to our Ceph cluster so we can’t use it yet, but we should have the monitor daemons running and the cluster configuration deployed on all servers.

Run the below command and take a look at the output.

ceph -s

The output should show

cluster 51e1ddff-ff28-4f58-af7e-e94448e5324b
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 3 mons at {ceph1=10.10.10.1:6789/0,ceph2=10.10.10.2:6789/0,ceph3=10.10.10.3:6789/0}, election epoch 6, quorum 0,1,2 ceph1,ceph2,ceph3
   osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e8: 1/1/1 up {0=web1=up:active}, 2 up:standby

As you can see, three Ceph servers are referenced on port 6789 which is the monitor daemon port number.

Add storage to the Ceph cluster

We’ve got our Ceph cluster, and we’ve got the storage device that we created as the first step – it’s time to put the two together. Run the below commands on the same machine that you ran the above steps on. You’ll need to replace /dev/sda with the block device on each Ceph machine that you’d like to use. Note that the block device (sda) does not need to be the same on all machines.

ceph-deploy osd create --fs-type ext4 ceph1:/dev/sda
ceph-deploy osd create --fs-type ext4 ceph2:/dev/sda
ceph-deploy osd create --fs-type ext4 ceph3:/dev/sda

Or…

You can use a directory as storage for Ceph, rather than a block device.

If you’re following this tutorial and creating a loop device to use with Ceph then you’ll need to ensure there is a filesystem on the loop0 device and that it’s mounted. You can skip this next step if you are just using an existing directory.

Run the below commands (if you’re using a loop device) on each of the machines that has a loop device you’d like to use. We’re assuming that your loop device is loop0. For this example we’ll run it on each of the three machines: ceph1, ceph2 and ceph3.

mkfs.ext4 /dev/loop0
mkdir /mnt/ceph-backing0
echo "/dev/loop0 /mnt/ceph-backing0 ext4 defaults 1 1" >> /etc/fstab
mount /mnt/ceph-backing0

You can use a directory path on the Ceph machine as the OSD device. This may be an option if you’re in an OpenVZ or Docker container that doesn’t allow you to pass through block devices.

ceph-deploy osd prepare ceph1:/mnt/ceph-backing0
ceph-deploy osd prepare ceph2:/mnt/ceph-backing0
ceph-deploy osd prepare ceph3:/mnt/ceph-backing0

And then activate the storage:

ceph-deploy osd activate ceph1:/mnt/ceph-backing0
ceph-deploy osd activate ceph2:/mnt/ceph-backing0
ceph-deploy osd activate ceph3:/mnt/ceph-backing0
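
At this point it’s worth checking that the OSDs have come up and joined the cluster – ceph -s should now report the OSDs as up and in, and ceph osd tree will show one OSD per machine.

ceph -s
ceph osd tree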

Mount a Ceph device as a folder

That’s the server side done! The last step to using our Ceph storage cluster is to mount the cluster to a mountpoint on the local filesystem. Here we’re going to use /mnt/ha-pool as the mount point but you can change that to whatever you’d like. Run these commands on any machines that you’d like to mount the Ceph volume on.

First create the mount point where the Ceph storage will be accessible from.

mkdir /mnt/ha-pool

Then we need to export the key so that the ceph-client can authenticate with the Ceph daemon. You could turn authentication off, or even create a non-admin user secret but for this tutorial we’ll just use the admin user.

ceph-authtool --name client.admin /etc/ceph/ceph.client.admin.keyring --print-key >> /etc/ceph/admin.secret

Then run the below command to add an entry to your fstab file so that the Ceph volume will be automatically mounted on machine start. This will mount the Ceph volume at /mnt/ha-pool.

echo "ceph1,ceph2,ceph3:/ /mnt/ha-pool/ ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 2" >> /etc/fstab

Finally run the mount command

mount /mnt/ha-pool

One last check to make sure you’re up and running:

df -h | grep ha-pool
10.10.10.1,10.10.10.2,10.10.10.3:/                    6G   3G   3G  54% /mnt/ha-pool

And that’s it! You have a working Ceph cluster up and running!


Use A File As A Linux Block Device


Just like when creating a SWAP file, you can create a file on a disk and present it as a block device. The block device would have a maximum size equal to that of the backing file, and (as long as it’s not in use) could be moved around like a normal file. For example, I could create a 1GB file on the filesystem and make Linux treat the file as a disk mounted in /dev/. And guess what – that’s what we’re going to do.

Create a file and filesystem to use as a block device

First off, use dd to create a 1GB file on an existing disk that we’ll use for our storage device:

dd if=/dev/zero of=/root/diskimage bs=1M count=1024
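
If your filesystem supports it, fallocate will reserve the space near-instantly rather than writing a gigabyte of zeros, and produces an equivalent backing file for our purposes.

fallocate -l 1G /root/diskimage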

Then ‘format’ the file to give it the structure of a filesystem. For this example we’re going to use ext4 but you could choose any filesystem that meets your needs.

mkfs.ext4 /root/diskimage

You’ll be prompted with Proceed anyway?. Type y and press return to proceed with the process.

mke2fs 1.42.5 (29-Jul-2012)
/root/diskimage is not a block special device.

Proceed anyway? (y,n) y

Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Mounting a loop device

Before mounting the file we need to check that there is a free /dev/loopX loopback device that we can use to represent our new block device.

Run the below command; if there is any output, check whether it references the loop device you were planning to use (the output lists the /dev/loop devices that are currently mounted). If your chosen loop device is already in use, either see the section below on unmounting a loop device, or pick a higher numbered device – loop devices start at loop0 and count up through loop1, loop2 and so on.

cat /proc/mounts | grep /dev/loop
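
Alternatively, losetup itself can list the loop devices that are in use and report the first free one, which saves working it out from /proc/mounts.

losetup -a
losetup -f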

Once you have the file that you’d like to mount and a free loop device then you can go ahead and mount the file as a block device. You have two options:

  1. Mount the file as a block device only
  2. Mount the file as a block device and mount the filesystem of it on a local mount point (eg. /mnt/mymountpoint).

For option 1; to only mount the file as a device in /dev/, run the below command and change /root/diskimage to the path of the file you’d like to mount. loop0 can also be incremented as explained above.

losetup /dev/loop0 /root/diskimage

If you’d like this to be remounted after a machine reboot then add the above line to the rc.local file.

vi /etc/rc.local

And add:

losetup /dev/loop0 /root/diskimage

 

For option 2; to mount the file and the filesystem on it, use the mount command. You must have already created the mount point locally before running the command, as you would when mounting a disk or NFS share.

mkdir /mnt/mymountpoint

Then run the mount command and specify the loop device, the path of the file and the path to mount the filesystem on:

mount -o loop=/dev/loop0 /root/diskimage /mnt/mymountpoint

To check the file has been mounted you can use the df command:

df -h | grep mymountpoint
/dev/loop0  976M  1.3M  924M  1% /mnt/mymountpoint

Unmounting a loop device

If you’ve mounted the filesystem on the block device using the mount command then make sure it’s unmounted before proceeding.

umount /mnt/mymountpoint

To then free the loop0 device (or whichever loop device you’ve used) you’ll need the losetup command with the -d switch.

losetup -d /dev/loop0

 

