Linux User Management Cheat Sheet

This is my cheat sheet on Linux user administration, covering tasks such as adding and removing users and assigning them to groups.

Add a new Linux user

Use the useradd command to add a new user.

useradd [USERNAME]
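
On many distributions useradd on its own does not create a home directory or set a login shell. If you want those as well, the standard -m and -s flags can be added; the shell path below is just an example.

useradd -m -s /bin/bash [USERNAME]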

Change a user’s password

The user account is locked until you set a password with the passwd command.

passwd [USERNAME]

Add user to group – new user

If you are adding a new user, you can add it to a group in the same command. This command will create a user called [USERNAME] and add it to the group [GROUPNAME].

useradd -G [GROUPNAME] [USERNAME]

Add user to group – existing user

If the user already exists, you can add it to an existing group with the usermod command.

usermod -a -G [GROUPNAME] [USERNAME]
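
For example, to add an existing user to the sudo group and then confirm the change (the user and group names below are hypothetical):

usermod -a -G sudo james
groups james

The -a flag appends the group rather than replacing the user’s existing group list, and the new membership takes effect the next time the user logs in.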

Delete a user

Run the userdel command to remove an existing Linux user.

userdel [USERNAME]
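
By default userdel leaves the user’s home directory and mail spool in place; add the -r flag if you also want those removed.

userdel -r [USERNAME]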

View existing users and groups

Run the below cat command to view the existing Linux users. You will see the user names and user IDs of all users on your server.

cat /etc/passwd | sort

Use this cat command to view the existing Linux groups.

cat /etc/group | sort
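
To inspect a single account or group rather than the whole file, id and getent work as well; the user and group names below are only examples.

id james
getent group sudo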

Change a user’s home directory location

You can change a user’s home directory with the usermod command. The -m flag moves the contents of the existing home directory to the new location.

usermod -m -d [NEW_DIRECTORY] [USERNAME]
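
For example, to move a hypothetical user james to /srv/home/james and migrate the contents of the old home directory in one step:

usermod -m -d /srv/home/james james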

Change a user’s UID

Use the usermod -u command to change the user ID of a user.

usermod -u [UID] [USERNAME]
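
Files inside the user’s home directory are re-owned automatically, but files elsewhere keep the old numeric UID. A sketch of a fix-up, assuming the old UID was 1001 and the user is called james (both hypothetical):

usermod -u 1050 james
find / -xdev -user 1001 -exec chown -h james {} \;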

 


Create a 3 Node Ceph Storage Cluster


Ceph is an open source storage platform which is designed for modern storage needs. Ceph is scalable to the exabyte level and designed to have no single points of failure, making it ideal for applications which require highly available, flexible storage.

The below diagram shows the layout of an example 3 node cluster with Ceph storage. Two network interfaces can be used to increase bandwidth and redundancy. This can help to maintain sufficient bandwidth for storage requirements without affecting client applications.

[Diagram: example 3 node Ceph cluster with two network interfaces per node]

This example will create a 3 node Ceph cluster with no single point of failure to provide highly redundant storage. I will refer to three host names which are all resolvable via my LAN DNS server; ceph1, ceph2 and ceph3, which are all on the jamescoyle.net domain. Each of these nodes has two disks configured; one which runs the Linux OS and one which is going to be used for the Ceph storage. The below output shows the storage available, which is exactly the same on each host. /dev/vda is the root partition containing the OS install and /dev/vdb is an untouched partition which will be used for Ceph.

root@ceph1:~# fdisk -l | grep /dev/vd
Disk /dev/vdb doesn't contain a valid partition table
Disk /dev/mapper/pve-root doesn't contain a valid partition table
Disk /dev/mapper/pve-swap doesn't contain a valid partition table
Disk /dev/mapper/pve-data doesn't contain a valid partition table
Disk /dev/vda: 21.5 GB, 21474836480 bytes
/dev/vda1   *        2048     1048575      523264   83  Linux
/dev/vda2         1048576    41943039    20447232   8e  Linux LVM
Disk /dev/vdb: 107.4 GB, 107374182400 bytes

Before getting started with setting up the Ceph cluster, you need to do some preparation work. Make sure the following prerequisites are met on every node before continuing with the tutorial.

You will need to perform the following steps on all nodes in your Ceph cluster. First you will add the Ceph repositories and download the key to make the latest Ceph packages available. Add the following line to a new /etc/apt/sources.list.d/ file.

vi /etc/apt/sources.list.d/ceph.list

Add the below entry, save and close the file.

deb http://ceph.com/debian-dumpling/ wheezy main

Download the key from Ceph’s git page and install it.

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -

Update all local repository cache.

sudo apt-get update

Note: if you see the below error when running apt-get update then the above wget command has failed – it could be because the Ceph git URL has changed.

W: GPG error: http://ceph.com wheezy Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 7EBFDD5D17ED316D

Run the following command on just one of your Ceph nodes – I’ll use ceph1 for this example – to install the ceph-deploy command.

apt-get install ceph-deploy -y

Create the first Ceph storage cluster and specify the first server in your cluster by either hostname or IP address for [SERVER].

ceph-deploy new [SERVER]

For example

ceph-deploy new ceph1.jamescoyle.net

Now deploy Ceph onto all nodes which will be used for the Ceph storage.  Replace the below [SERVER] tags with the host name or IP address of your Ceph cluster including the host you are running the command on. See this post if you get a key error here.

ceph-deploy install [SERVER1] [SERVER2] [SERVER3] [SERVER...]

For example

ceph-deploy install ceph1.jamescoyle.net ceph2.jamescoyle.net ceph3.jamescoyle.net

Install the Ceph monitors and accept the key warning as keys are generated. So that you don’t have a single point of failure, you will need at least 3 monitors. You must also have an odd number of monitors – 3, 5, 7, etc. Again, you will need to replace the [SERVER] tags with your server names or IP addresses.

ceph-deploy mon create [SERVER1] [SERVER2] [SERVER3] [SERVER...]

Example

ceph-deploy mon create ceph1.jamescoyle.net ceph2.jamescoyle.net ceph3.jamescoyle.net

Now gather the keys of the deployed installation, just on your primary server.

ceph-deploy gatherkeys [SERVER1]

Example

ceph-deploy gatherkeys ceph1.jamescoyle.net
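
At this point it’s worth checking that the monitors have formed a quorum. The ceph command needs the cluster config and admin keyring in /etc/ceph, which ceph-deploy can push out with its admin sub-command; treat the below as an optional sanity check rather than part of the original walkthrough.

ceph-deploy admin ceph1.jamescoyle.net
ceph -s
ceph mon stat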

It’s now time to start adding storage to the Ceph cluster. The fdisk output at the top of this page shows that the disk I’m going to use for Ceph is /dev/vdb, which is the same for all the nodes in my cluster. Using Ceph terminology, we will create an OSD based on each disk in the cluster. We could have used a file system location instead of a whole disk but, for this example, we will use a whole disk. Use the below command, changing [SERVER] to the name of the Ceph server which houses the disk and [DISK] to the disk representation in /dev/.

ceph-deploy osd --zap-disk create [SERVER]:[DISK]

For example

ceph-deploy osd --zap-disk create ceph1.jamescoyle.net:vdb

If the command fails, it’s likely because you have partitions on your disk. Run the fdisk command on the disk and press d to delete the partitions and w to save the changes. For example:

fdisk /dev/vdb

Run the osd command for all remaining nodes in your Ceph cluster:

ceph-deploy osd --zap-disk create ceph2.jamescoyle.net:vdb
ceph-deploy osd --zap-disk create ceph3.jamescoyle.net:vdb
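
Once the OSDs are created, the cluster should report three OSDs up and in. A quick, optional check, run from the node holding the admin keyring:

ceph osd tree
ceph -s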

We now have to calculate the number of placement groups (PG) for our storage pool. A storage pool is a collection of OSDs, 3 in our case, which should each contain around 100 placement groups. Each placement group will hold your client data and map it to an OSD whilst providing redundancy to protect against disk failure.

To calculate your placement group count, multiply the number of OSDs you have by 100 and divide by the number of times each piece of data is stored (the number of replicas). The default is to store each piece of data twice, which means that if a disk fails you won’t lose the data because another copy exists.

For our example,

3 OSDs * 100 = 300
Divided by replicas, 300 / 2 = 150
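
The same calculation as a small shell snippet, with hypothetical variable names:

OSDS=3
REPLICAS=2
echo $(( OSDS * 100 / REPLICAS ))   # 150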

Now let’s create the storage pool! Use the below command and substitute [NAME] with the name to give this storage pool and [PG] with the number we just calculated.

ceph osd pool create [NAME] [PG]

For example

ceph osd pool create datastore 150
pool 'datastore' created
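
You can confirm the pool exists and check the placement group count it was created with:

ceph osd lspools
ceph osd pool get datastore pg_num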

You have now completed the setup of the Ceph storage pool. See my blog post on mounting Ceph storage in Proxmox.


Manually Mount a Ceph Storage Pool in Proxmox


The latest BETA of Proxmox, and the soon-to-be-released 3.2, comes with the Ceph client automatically installed, which makes mounting Ceph storage pools painless.

You can mount the Ceph storage pool using the Proxmox web GUI.

You may need to copy the Ceph storage pool keyring from your Ceph server to your Proxmox server. If you use authentication on your Ceph storage pool, log in to your Ceph server via SSH and run the below scp command to copy your Ceph keyring to your Proxmox server. You will need to replace [PROXMOX_SERVER] with the host name or IP address of your Proxmox server; [STORAGE_NAME] is the name you will use for your Proxmox storage mount, which will be used in the web GUI.

scp /etc/ceph/ceph.client.admin.keyring [PROXMOX_SERVER]:/etc/pve/priv/ceph/[STORAGE_NAME].keyring

For example

scp /etc/ceph/ceph.client.admin.keyring prox3.jamescoyle.net:/etc/pve/priv/ceph/ceph_storage.keyring

To use the Proxmox GUI to mount a Ceph storage pool, login to the web GUI and click on the Datacenter folder in the top left, then click the Storage tab, Add and finally RBD.

[Screenshot: the Storage tab with Add > RBD selected]

Enter the details of your Ceph storage pool in the new dialogue box which appears.

  • ID: the name which this storage mount will have in Proxmox. If you use authentication on your Ceph server, you will need to use the same name as the [STORAGE_NAME] field above.
  • Pool: the Ceph storage pool name.
  • Monitor Host: the host or IP addresses of your Ceph monitor servers. You can enter a single host or multiple hosts separated by a semicolon [;].
  • User name: the user name to use when connecting to the Ceph storage pool. If you are not using user authentication, set this to the default value of admin.

[Screenshot: the Add RBD storage dialogue]

Don’t forget to make sure that the Proxmox host can reach the remote Ceph monitor hosts – by default a Ceph monitor runs on port 6789.
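
A quick way to verify this from the Proxmox host is to probe the monitor port; this assumes netcat is installed and [MONITOR_HOST] is a placeholder for one of your monitor hosts.

nc -zv [MONITOR_HOST] 6789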

[Screenshot: the mounted Ceph RBD storage in Proxmox]


Create SSH Key Authentication Between Nodes


Key-based authentication is one of the more secure ways to grant users access to a Linux server. The standard password authentication which is usually used to log in to a server is replaced with an SSH key which is presented when authenticating. This increases security as passwords can eventually be cracked using brute force or even guessed in some circumstances. SSH keys are practically impossible to guess and almost impossible to crack using brute force due to their length and complexity.

An SSH key is actually two strings of characters – a private key which is used to connect to the server, and a public key which sits on the server itself.

Run the below command to create the key pair on the client machine.

ssh-keygen -t rsa

Accept the default location to save the key, which will be inside the current user’s home directory:

Enter file in which to save the key (/home/james/.ssh/id_rsa):

For additional security, you can add a passphrase to the private key. The key then cannot be used without the passphrase, which increases the security of the key itself. Simply press return if you do not wish to use a passphrase.

Note: if you are using the key for applications to gain access to other servers, it’s unlikely that a passphrase will be supported.  

Enter passphrase (empty for no passphrase):
Your identification has been saved in /home/james/.ssh/id_rsa
Your public key has been saved in /home/james/.ssh/id_rsa.pub. 
The key fingerprint is:
46:ba:02:fd:2f:9c:b9:39:ec:6c:90:50:d8:ec:7b:00 james@testpc
The key's randomart image is:
+--[ RSA 2048]----+
|   +             |
|  E +            |
|   +    .        |
|  ..o  o         |
|  ...+. S        |
|   .+..o         |
|    .=oo         |
|     oOo         |
|     o=+.        |
+-----------------+

The two keys have been created:

  • Private: /home/james/.ssh/id_rsa
  • Public: /home/james/.ssh/id_rsa.pub

The final step is to copy the public key to the machine which you are going to connect to. On Debian or Ubuntu you can use ssh-copy-id – you will need to change [USER] to the user you will connect to the remote machine as and [SERVER] to the host name or IP address of the remote server you will connect to.

ssh-copy-id [USER]@[SERVER]

Not all Linux distributions contain the required ssh-copy-id utility – many CentOS/Red Hat distributions do not, for example – so you will need to use the manual method instead. Again, you will need to substitute the [USER] and [SERVER] attributes with the details of your remote machine.

cat /home/james/.ssh/id_rsa.pub | ssh [USER]@[SERVER] "cat >> ~/.ssh/authorized_keys"
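
If the ~/.ssh directory does not yet exist on the remote machine the append above will fail, so a slightly more defensive variation creates it and sets the expected permissions at the same time:

cat /home/james/.ssh/id_rsa.pub | ssh [USER]@[SERVER] "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"

Either way, test the key by connecting with ssh [USER]@[SERVER] – you should no longer be prompted for the account password.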

Although it is not always recommended for security reasons, you can copy this public key to multiple machines so that you can use the same private key to connect to each of them.


Export MySQL Database into Separate Files per Table


I have recently been using git to check in an application’s database. The database has many tables, some of which are populated with test data, and it created a fairly large file when exported. I noticed a few issues when checking this into git, namely that the whole file was uploaded and saved in git as one large file containing both my changes and everything else which had not changed.

Instead of checking this large file into git as a single unit, breaking it into several smaller files means that only the tables which have changed are added to each git commit, resulting in much smaller uploads.

The below code is a bash script which lets you export, using mysqldump, all tables in a MySQL database to individual files. This will result in one file per MySQL table in the database. You will need to modify the following attributes:

  • [USER] – the username to use when connecting to the MySQL instance.
  • [PASSWORD] – the password for the above MySQL user.
  • [DATABASE] – the name of the MySQL database to export.
  • [BACKUP_LOCATION] – the location on the MySQL server where the SQL files will be created.

#!/bin/bash
GIT_MYSQL=/[BACKUP_LOCATION]
for T in `mysql -u [USER] -p[PASSWORD] -N -B -e 'show tables from [DATABASE]'`;
do
    echo "Backing up $T"
    mysqldump --skip-comments --compact -u [USER] -p[PASSWORD] [DATABASE] $T > $GIT_MYSQL/$T.sql
done;
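
To restore a single exported table later, feed the matching file back into mysql; the [TABLE] placeholder below is hypothetical and follows the naming used by the script above.

mysql -u [USER] -p[PASSWORD] [DATABASE] < /[BACKUP_LOCATION]/[TABLE].sql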

Test Internet Speed from the Command Line


Matt Martz has kindly created and shared a Python script for interacting with speedtest.net to test bandwidth to the internet from the command line. This is a great script whose only dependency is Python – something which is available on most Linux distributions.

Grab the script with wget on the server you wish to test. Remember, this is to test the bandwidth to servers which are publicly available on the internet – it does not test your internal server to server communication. See Iperf for details on testing LAN bandwidth.

wget -O speedtest-cli https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py

Add the execute permission:

chmod +x speedtest-cli

And that’s all you need to start your first speed test! Run speedtest-cli with the --simple switch to perform a basic speed test. The script will automatically choose the best server based on your location.

./speedtest-cli --simple
Ping: 10.331 ms
Download: 6.47 Mbit/s
Upload: 5.30 Mbit/s

You can use the --help switch to see all the options you have with the tool.

./speedtest-cli --help
usage: speedtest-cli [-h] [--share] [--simple] [--list] [--server SERVER]
                     [--mini MINI] [--source SOURCE] [--version]

Command line interface for testing internet bandwidth using speedtest.net.
--------------------------------------------------------------------------
https://github.com/sivel/speedtest-cli

optional arguments:
  -h, --help       show this help message and exit
  --share          Generate and provide a URL to the speedtest.net share
                   results image
  --simple         Suppress verbose output, only show basic information
  --list           Display a list of speedtest.net servers sorted by distance
  --server SERVER  Specify a server ID to test against
  --mini MINI      URL of the Speedtest Mini server
  --source SOURCE  Source IP address to bind to
  --version        Show the version number and exit
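
For example, the --list and --server switches shown above can be combined to test against a specific server rather than the automatically chosen one; the server ID below is made up.

./speedtest-cli --list | head
./speedtest-cli --server 1234 --share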
