
Using Dockerfiles to build new Docker images

Category : How-to


In a previous blog post I detailed the steps involved in creating a new Docker container, making some changes and saving the image back to the local repository. The process described works well but it’s a very manual affair which Docker has a solution for.

Docker has a scripting language which can be used to create a new instance with a predefined list of commands and properties that will be used to build your new Docker instance. You could, for example, use a Dockerfile to install Apache, configure the firewall and apply any other configuration you may need.

The benefit of using a Dockerfile, rather than making all the changes by hand and saving the image, is that the underlying OS and the additions you wish to make are kept completely independent. You can run the same Dockerfile on top of different OS images; using the example that follows, you could build on either a Debian or an Ubuntu base by changing nothing but the FROM line.

Create a directory to hold your Dockerfile project, which we’ll call apache2 for this example. I’ll be placing all my Dockerfiles in their own project directory under dockerfiles in my home directory.

mkdir -p /home/james/dockerfiles/apache2

Open a text file named Dockerfile in your favourite text editor in the project folder we just created. This is the standard file name that Docker expects when building an image from a Dockerfile.

vi /home/james/dockerfiles/apache2/Dockerfile

There are various commands we can use within a Dockerfile. The first command is the FROM statement which indicates which image should be used when creating your instance. I’m going to use the ubuntu image which I have previously downloaded to my local Docker server.

FROM ubuntu:14.04

Add the MAINTAINER instruction to record the author of the image. This can be your name, username or whatever handle you’d like to be known by.

MAINTAINER James Coyle <[email protected]>

We are now going to use the RUN command to specify the commands that should be executed while the image is being built. The commands will be executed in the order they appear in the Dockerfile. We will be installing Apache2, so we’ll be using the apt-get command to install it.

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y apache2
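
Each RUN instruction adds a new layer to the image. The three separate commands above work fine, but a common variation (not the method used in this article) is to chain them into a single RUN so that the package index update and the install happen in the same layer:

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y apache2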

Next we’ll create a data directory inside the image where we will keep the web files that are to be served by Apache.

RUN mkdir -p /data/apache/www
RUN chown -R root:www-data /data/apache/www

It’s a good idea to separate the Docker container from any user data so that the container can be used for different purposes. What this means in our example is that we will keep all the website data (HTML files, etc.) out of our container, leaving only the Apache software and general configuration within the container. This means that we can reuse our Docker image to create containers for other websites.

The Docker VOLUME command marks a directory inside the container as a mount point, which a directory on the Docker host can be mapped to once your container is created. The below example makes the directory /data/apache/www available for mapping later.

Add the VOLUME reference to your Dockerfile.

VOLUME /data/apache/www

We will need to be able to reach our container on port 80 so that we can use the Apache service over the network. Docker uses the EXPOSE command followed by a port number to declare that the container listens on that port; the port still needs to be published to the host when the container is run. Add the below entry for port 80.

EXPOSE 80

We now need to do some find and replace magic to change the Apache default site configuration to point to our new location, rather than the Apache default (/var/www/html on Ubuntu 14.04). This isn’t a Docker specific command, but is required for this example.

RUN sed -i 's#DocumentRoot /var/www.*#DocumentRoot /data/apache/www#' /etc/apache2/sites-available/000-default.conf

Finally we’ll need to tell Docker what should be executed in order to ‘run’ this container. For this example, we use the apache2ctl command with the -D FOREGROUND switch so that Apache stays in the foreground and keeps the container alive.

ENTRYPOINT /usr/sbin/apache2ctl -D FOREGROUND
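
The line above uses the shell form of ENTRYPOINT. Docker also accepts an exec form, written as a JSON array, which runs the command directly rather than through /bin/sh; either would work for this example:

ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]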

And that’s it: your first Dockerfile. Build the image from your newly created Dockerfile by changing to the project directory and running the docker build command. Use the -t switch to specify a repository name and tag for the image.

cd /home/james/dockerfiles/apache2
docker build -t apache2:test .

It will take a few minutes for the image to build. Once complete, you’ll be able to see it in the Docker image list by using the docker images command.

root@docker:~/apache2# docker images
REPOSITORY  TAG  IMAGE ID     CREATED        VIRTUAL SIZE
apache2     test fdf56ad12ffa 4 minutes ago  228.2 MB
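
Once the image exists you can start a container from it, publishing port 80 and mapping the VOLUME directory to a folder on the host. The host path and published port below are just examples, so substitute your own:

docker run -d -p 8080:80 -v /home/james/www:/data/apache/www apache2:test

Browsing to port 8080 on the Docker host should then serve whatever files you place in /home/james/www.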

 

The whole Dockerfile:

FROM ubuntu:14.04

MAINTAINER James Coyle <[email protected]>

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y apache2

RUN mkdir -p /data/apache/www
RUN chown -R root:www-data /data/apache/www

VOLUME /data/apache/www

EXPOSE 80

RUN sed -i 's#DocumentRoot /var/www.*#DocumentRoot /data/apache/www#' /etc/apache2/sites-available/000-default.conf

ENTRYPOINT /usr/sbin/apache2ctl -D FOREGROUND

 


My Proxmox VE Bash Scripts on GitHub

Category : Knowledge


I’ve created a public repository on GitHub with a few Bash scripts which may (or may not) be helpful for anyone administering a Proxmox VE server.

I’ll add to this repository over time with any new scripts I write. Let me know if they are useful, or if you have any of your own in the comments.

Currently the scripts are:

  • backup-all – will backup all OpenVZ Containers on the Proxmox cluster.
  • ct-disk-space – prints out the disk space used by all OpenVZ Containers on the server.
  • migrate – will migrate a Container to a different storage ID.
  • restore-all – will restore all OpenVZ Containers on a cluster to the latest available backup.

Link to GitHub proxmox-scripts repository: https://github.com/JAC2703/proxmox-scripts
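
If you’d like to try them, clone the repository onto a Proxmox host and have a read through each script before running it:

git clone https://github.com/JAC2703/proxmox-scripts.git
cd proxmox-scripts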


Bash getopts Example


Category : How-to


getopts is a way of adding intuitive options to custom Bash scripts in Linux. getopts is a command that’s built into all recent versions of Bash, and is a more recent replacement for the older external getopt command (notice getopt has no ‘s’).

You’re not able to do anything too fancy with getopts, such as GNU long options like --filter, but you are able to pass in semi-meaningful arguments with relative ease.

Let’s take a look at the below example to understand what we’re getting into. 

Calling getopts Scripts

Let’s start with how the finished script will be called. It accepts 3 arguments:

  • A port
  • A hostname
  • A force parameter

The script we’re creating is just an example of getopts usage, so the above parameters won’t really do anything; I’ll leave it to your imagination to come up with a reason for wanting to collect such information. That said, we’re going to call our example script send_curl_command.sh and it will expect the above-mentioned parameters to be passed in for it to work.

send_curl_command.sh -p 443 -h api.jamescoyle.net -f

getopts Example

#!/bin/bash

# Echo usage if something isn't right.
usage() { 
    echo "Usage: $0 [-p <80|443>] [-h <string>] [-f]" 1>&2; exit 1; 
}

while getopts ":p:h:f" o; do
    case "${o}" in
        p)
            PORT=${OPTARG}
            [[ $PORT != "80" && $PORT != "443" ]] && usage
            ;;
        h)
            HOST=${OPTARG}
            ;;
        f)  
            FORCE=1
            ;;
        :)  
            echo "ERROR: Option -$OPTARG requires an argument"
            usage
            ;;
        \?)
            echo "ERROR: Invalid option -$OPTARG"
            usage
            ;;
    esac
done
shift $((OPTIND-1))

# Check required switches exist
if [ -z "${PORT}" ] || [ -z "${HOST}" ]; then
    usage
fi

echo "p = ${PORT}"
echo "h = ${HOST}"

The script above is a relatively simple example of how to use getopts and can be broken into 3 main parts.

usage is a Bash function that echoes the script name and some literal text describing which parameters are accepted, then exits the script. The idea here is to call this function if anything is wrong with the arguments passed, so that a helpful hint can be given to the user and the script can exit.

The if block at the bottom of the script runs a few checks on the switches to ensure that they have been set. The -z test is a simple Bash check that a variable is empty – it doesn’t do anything more clever than that. In addition, in the middle block you’ll find another check, [[ $PORT != "80" && $PORT != "443" ]], which ensures that the value passed with -p is equal to either 80 or 443. You can check in either location, or both; just remember to call usage if something isn’t right so that the script can exit.
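
One line not covered by those checks is shift $((OPTIND-1)), just after the while loop. getopts keeps a count of the arguments it has processed in the OPTIND variable, and shifting that many away leaves any remaining positional arguments in $1, $2 and so on. It makes no difference to this example, but as a rough illustration (payload.json is a made-up extra argument, not something our script expects):

./send_curl_command.sh -p 443 -h api.jamescoyle.net -f payload.json
# inside the script, after the shift, $1 would contain payload.json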

That leaves only the central block of code – the meat of the getopts command. Much of this block can be considered boilerplate, however there are 2 parts you’ll need to edit: the quoted string on the first line, which holds the parameters that instruct getopts how to work, and the case statement.

getopts Arguments and case Statement

The quoted arguments following the getopts command can look fairly cryptic at first, but once they’re broken down into their individual parts it’s really easy to build up the functionality you need. Keeping with the above example, let’s take a look at the getopts arguments ":p:h:f" o.

  • ":" (colon) as the leading character tells getopts that we want to handle errors ourselves rather than have it print its own messages (more on that in the last bullet point below). If you omit the leading : then getopts throws its own generic errors – something we’re handling manually in this example to provide more meaningful output in the context of our script.
  • "p:" is the first option that we tell getopts to expect. p specifies the character that will be passed on the command line when calling the script – for example myscript.sh -p. The colon after it specifies that an argument must follow the switch, for example myscript.sh -p 443. During processing, the value 443 will be made available alongside the -p switch in the OPTARG variable.
  • "h:" is exactly the same as the above -p switch, only the character here is h.
  • "f" specifies, as above, that the -f switch is expected, however here there is no colon. That’s because the -f switch will not have an argument following it, for example
    myscript.sh -p 443 -h myhost.com -f
  • "o" outside of the quotation marks is a lonely o. This is the variable that getopts fills on each pass of the loop, and it will be set to either:
    • The switch being received, if it’s a valid switch such as p, h or f.
    • a ? if the switch being passed is not an accepted switch (not specified in the getopts configuration). Because our getopts argument begins with :, as per the first bullet point above, OPTARG will also hold the offending character rather than getopts printing its own error.
    • : if the switch is acceptable but doesn’t have an argument when it should (configured with a : such as "p:"). This is only available if the getopts argument begins with : as per the first bullet point above.

That’s all the configuration needed for the getopts command to function. However, to make it useful we need to understand the arguments that are passed in, check that they’re sensible and make them available for the script to use.

case Statement

The case statement is evaluated once for each option that getopts parses. In the above example, the first case item is p, which is matched when the -p switch is passed with a value. The code sets the variable PORT to the value passed in and then checks that it is either 80 or 443. The case items for h and f should be easy enough to understand and are very similar to p.

Where things get more interesting are with the bottom two case items:

  • : (colon) here matches the case where an expected switch has been passed into the script but the argument that should follow it was not found. In this code block we print an error and call the usage function to remind the user what the script expects, then exit.
  • \? is simply a question mark escaped for Bash. It matches the case where the switch currently being processed is not one we expect. Again, we print an error and call our usage function.
  • * (asterisk) is not actually used in our sample script but it’s essentially a default which would catch anything that hasn’t been caught by a previous case item. You could use it to combine the above : and \? items, as shown in the sketch below.
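
A rough sketch of that combined default (not part of the script above) could look like this; it only makes sense when you don’t need a different error message for each failure:

        *)
            echo "ERROR: Invalid option or missing argument"
            usage
            ;;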

Proxmox 4.0 Beta 1 is Now Available

Category : Tech News


The latest release of Proxmox VE has entered its first beta today with the release of Proxmox VE 4.0 BETA 1.

It’s the first public release of version 4.0 and comes with some pretty big changes!

HA Clusters are now configured using the Proxmox Web GUI and don’t require separate fencing kit – it’s provided out of the box. It’s about time, in my opinion, as plenty of other software manages HA without 3rd party processes to tell you what’s up and what’s down. This is a very welcome addition to any small Proxmox Cluster.

Linux Containers (LXC) will replace OpenVZ as the Proxmox VE container solution. As you’d expect, LXC containers will work with the usual storage plugins offered by Proxmox, although support for Sheepdog, Ceph, iSCSI, DRBD and GlusterFS will come in a future release.

There are some other updates too, such as shipping the latest DRBD9 packages, and the whole thing now runs on the latest Debian release, Debian Jessie 8.1.

You can download the ISO from their download site and install it onto a new machine. Unfortunately, because it’s still a BETA product there is no upgrade path from 3.4. I’d expect this to change in the near future.

It’s worth noting, however, that the website states that any BETA version of release 4.0 will be upgradeable to the full 4.0 when it’s available.
