Category Archives: How-to

Change the Password for an OpenVZ Container


If you have forgotten the password for an OpenVZ container – relax! Help is at hand.

Luckily OpenVZ makes it very easy to set or reset the password for any user of a container. You’ll need access to the terminal on the hardware node which is running the container to run a simple vzctl command.

vzctl is the CLI command which is used to configure and control an OpenVZ container. Using the --userpasswd switch we can reset a user's password. We can also use this command to create a new user if the specified user does not already exist.

vzctl set [VMID] --userpasswd [USER]:[PASSWORD]

Run the above command and substitute the following values for your own:

  • [VMID] is the ID of the container to set the new password on.
  • [USER] is the name of the user that you’d like to change the password for. If this user doesn’t exist then a new user will be created.
  • [PASSWORD] is the password to set for the [USER].

Note: The container needs to be running for this command to work as the user information is saved within the container and not in the container's configuration file like many of the other vzctl settings. If the container is not already running, this command will start it.
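For example, to reset the root password on a container with ID 101 (the ID and password here are hypothetical – substitute your own values):

vzctl set 101 --userpasswd root:MyN3wPassw0rd

You can then log in to the container as root using the new password.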

Remove a Windows Service


You can easily remove a Windows service from the Windows registry using a simple command prompt utility called SC.

SC is the command line utility which interacts with the Windows Service Controller and can be used to manage services on either a remote or local Windows machine.

You will need to find the name of the service you would like to remove before you can issue the SC command to remove it. You can either use the Windows Services console or the sc query command to find the name of a service.

Click the Start menu and type in Services and click the Windows Services console in the list. Find your service by name and double click it to see the Properties window. The service name can be seen at the top of the form. The below example shows the Skype Updater service.

[Image: Skype Updater service Properties window]

To use the sc query command to find your service name, open up a command prompt and type the below command.

sc query

Find the SERVICE_NAME attribute of your required service.
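The output of sc query can be quite long. If you'd rather not scroll through it all, you can filter the output down to just the service names; a simple sketch using the standard findstr utility:

sc query | findstr "SERVICE_NAME"

Add state= all after sc query to include services which are not currently running.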

Once you have the service name, you will need to stop the service before you can delete it. Issue the sc stop command followed by the service name. The below example shows how to stop the SkypeUpdate service.

sc stop SkypeUpdate

Finally, you can issue the sc delete command to remove the service.

sc delete SkypeUpdate
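You can confirm the service has been removed by querying it again; Windows should report that the specified service does not exist:

sc query SkypeUpdate

If the service is instead shown as marked for deletion, close the Services console and any open Properties windows, and the service will be removed once all handles to it are closed.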

Add Create OpenVZ Template to the Proxmox Web GUI


Creating a template from an OpenVZ container is a very manual process. My biggest problem is that you have to have root access to the Proxmox hardware node in order to create a tar from the root of the CT. See How to make a new OpenVZ template for more information on manually creating a template.

[Image: 'Create template' context menu entry in the Proxmox web GUI]

I created a small code patch for the Proxmox API and web GUI to add a ‘create template’ feature for CTs. The code adds a context menu entry when you right click on a CT in the Proxmox web GUI.

Before using the feature, the CT must be shut down and any network interfaces removed. The feature presents the user with a dialogue box requesting which storage device the template should be saved to, and what it should be called.

Once the storage has been selected and the template has been given a name, a new ‘create template’ task is created which archives the root directory of the selected container and adds it to the cache folder of the selected storage.

[Image: 'Create template' dialogue box]

The changes were declined by the Proxmox team on the grounds that creating a template is a technical process and may not result in a working, cloned instance. In addition, it is very easy to leave sensitive information in the CT which is the source of the template – all data on the CT's file system will be archived into the template and will be present in any CT created from it. If SSH keys are left on the source CT, for example, then they will also be present in the new CT.

Because the feature was not accepted into the main distribution of Proxmox, I will maintain it myself and manually apply the patches to my Proxmox servers after every update. I have created a public repository on my Gitlab server that holds the git patch file, which is available to everyone.

If you accept the risks mentioned above, and are happy to hack away at your Proxmox binaries, then you are welcome to try the patch for yourself.
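If you're new to patch files, applying one to the installed Proxmox files looks roughly like the below sketch. The file name here is hypothetical and the patch level (-p1) depends on how the patch was generated, so check the repository's instructions first:

patch -p1 --dry-run < create-template.patch
patch -p1 < create-template.patch

The --dry-run pass checks that the patch applies cleanly without modifying any files.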

You can download the patch and get more information on my public Proxmox Gitlab page.

I should mention that this patch may not always be up to date. In addition, this is changing the actual Proxmox distribution files and as such may have unintended side effects. Please use these patches with caution and only in your development environments.



Gitlab error “fatal: The remote end hung up unexpectedly” Again


I previously wrote in this blog post about how to fix an error with Gitlab. The error was presented when using the git push command with a remote repository that uses Gitlab's HTTP protocol and not the SSH protocol.

The following error was presented in the Git client when using the git push command:

git push -u origin master
Counting objects: 556, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (539/539), done.
error: RPC failed; result=22, HTTP code = 413
fatal: The remote end hung up unexpectedly
Writing objects: 100% (556/556), 1.45 MiB | 367.00 KiB/s, done.
Total 556 (delta 282), reused 0 (delta 0)
fatal: The remote end hung up unexpectedly
Everything up-to-date

The HTTP code 413 in the output means 'Request Entity Too Large' – Nginx, Gitlab's HTTP server, is not configured to receive large amounts of data. We need to add the client_max_body_size attribute to Gitlab's Nginx configuration file to specify the maximum amount of data Nginx will accept.

Open the configuration file and find the server block.

vi /etc/nginx/sites-available/gitlab

Add the client_max_body_size attribute and specify the size value to use.

client_max_body_size 20M;

The M stands for megabyte – you can also use G for gigabytes.

If the size of your git push ever exceeds the above value, you will have to either increase the size further or reduce your git commit sizes.
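Nginx won't pick up the new value until its configuration is reloaded. Test the configuration for syntax errors and then reload the service:

nginx -t
service nginx reload

Running nginx -t first means a typo in the file won't take your Gitlab server offline.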

Your completed /etc/nginx/sites-available/gitlab file should look like the below example which has a 20MB upload limit.

upstream gitlab {
  server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
  listen *:80 default_server;         # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
  server_name YOUR_SERVER_FQDN;     # e.g., server_name source.example.com;
  server_tokens off;     # don't show the version number, a security best practice
  root /home/git/gitlab/public;

  # Increase this if you want to upload large attachments
  # Or if you want to accept large git objects over http
  client_max_body_size 20m;

  # individual nginx logs for this gitlab vhost
  access_log  /var/log/nginx/gitlab_access.log;
  error_log   /var/log/nginx/gitlab_error.log;

  location / {
    # serve static files from defined root folder.
    # @gitlab is a named location for the upstream fallback, see below
    try_files $uri $uri/index.html $uri.html @gitlab;
  }

  # if a file, which is not found in the root folder, is requested,
  # then the proxy passes the request to the upstream (gitlab unicorn)
  location @gitlab {
    proxy_read_timeout 300; # Some requests take more than 30 seconds.
    proxy_connect_timeout 300; # Some requests take more than 30 seconds.
    proxy_redirect     off;

    proxy_set_header   X-Forwarded-Proto $scheme;
    proxy_set_header   Host              $http_host;
    proxy_set_header   X-Real-IP         $remote_addr;
    proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;

    proxy_pass http://gitlab;
  }

  error_page 502 /502.html;
}



Windows Error: “The program can’t start because WMVCORE.DLL is missing from your computer” in Windows Server 2008


I recently received the below error when running Blue Iris CCTV software on Windows Server 2008.

The program can’t start because WMVCORE.DLL is missing from your computer. Try reinstalling the program to fix this problem.

[Image: WMVCORE.DLL missing error message]

The error is caused by not having the Windows Media Player libraries installed. You can easily resolve the error by installing the Desktop Experience feature for Windows Server 2008, which includes Windows Media Player.

Click the Start menu, begin typing Turn Windows features on or off and click it when it appears in the menu.

[Image: 'Turn Windows features on or off' menu entry]

Click Features and click Add Features. Find the Desktop Experience entry and tick the check box. Click Add Required Features in the new dialogue which pops up.

[Image: Desktop Experience feature selection]

Click Next and then Install to install the required features. Click Close and then Yes to restart your computer and complete the installation.

When your computer restarts, click Close and Windows Media Player will now be available.

[Image: Desktop Experience installation complete]
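As an aside, the Desktop Experience feature can also be installed from an elevated command prompt using the servermanagercmd utility included with Windows Server 2008. This is just a sketch; the GUI route above achieves the same result:

servermanagercmd -install Desktop-Experience -restart

The -restart switch reboots the machine automatically once the feature has been installed.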


Ceph Storage on Proxmox


Ceph is an open source storage platform which is designed for modern storage needs. Ceph is scalable to the exabyte level and designed to have no single point of failure, making it ideal for applications which require highly available, flexible storage.

Since Proxmox 3.2, Ceph is supported as both a client and a server: the client is used for back end storage for VMs and the server is used for configuring storage devices. This means that a Ceph storage cluster can now be administered through the Proxmox web GUI and therefore centrally managed from a single location. In addition, as Proxmox now manages the Ceph server, the config can be stored in Proxmox's shared file system, meaning that the configuration is immediately replicated throughout the entire cluster.

The below diagram shows the layout of an example Proxmox cluster with Ceph storage.

  • Two nodes are dedicated to running VMs and use the Ceph storage hosted by the other nodes.
  • Two networks are used, one for management and application traffic and one for Ceph traffic only. This helps to maintain sufficient bandwidth for storage requirements without affecting the applications which are hosted by the VMs.

[Image: Example Proxmox cluster with Ceph storage]

Before getting started with setting up the Ceph cluster, we need to do some preparation work. Make sure the following prerequisites are met before continuing the tutorial.

  • You have a Proxmox cluster with the latest packages from the pvetest repository. You must have at least three nodes in your cluster. See How to set up a cluster.
  • SSH Keys are set up between all nodes in your cluster – Proxmox does this automatically as part of clustering but if you are using a different user, you may need to set them up manually.
  • NTP is set up on all nodes in your cluster to keep the time in sync. You can install it with: apt-get install ntp

[Image: three nodes in a single Proxmox cluster, as shown in the Proxmox web GUI]

The rest of this tutorial will assume that you have three nodes which are all clustered into a single Proxmox cluster. I will refer to three host names which are all resolvable via my LAN DNS server: prox1, prox2 and prox3, which are all on the jamescoyle.net domain. Each of these nodes has two disks configured: one which Proxmox is installed onto, which provides the small 'local' storage device shown in the image above, and one which is going to be used for the Ceph storage. The below output shows the storage available, which is exactly the same on each host. /dev/vda is the root partition containing the Proxmox install and /dev/vdb is an untouched partition which will be used for Ceph.

root@prox1:~# fdisk -l | grep /dev/vd
Disk /dev/vdb doesn't contain a valid partition table
Disk /dev/mapper/pve-root doesn't contain a valid partition table
Disk /dev/mapper/pve-swap doesn't contain a valid partition table
Disk /dev/mapper/pve-data doesn't contain a valid partition table
Disk /dev/vda: 21.5 GB, 21474836480 bytes
/dev/vda1   *        2048     1048575      523264   83  Linux
/dev/vda2         1048576    41943039    20447232   8e  Linux LVM
Disk /dev/vdb: 107.4 GB, 107374182400 bytes

Now that I've set the scene, let's start to put together our Ceph cluster! Before using the new Proxmox web GUI you must run a few SSH commands to set up the initial Ceph instance.

Run the below command on all of the nodes which you will use as a Ceph server. This will download and set up the latest Ceph packages.

pveceph install
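As the command has to be run on every node that will act as a Ceph server, you can save some typing by looping over the hosts with SSH. A small sketch using the example host names from this tutorial:

for host in prox1 prox2 prox3; do ssh $host pveceph install; done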

Create the Ceph config file by initialising pveceph. The config file, called ceph.conf, will be created in your /etc/pve/ directory. You should only run this on one node.

pveceph init --network 192.168.50.0/24

The next step is to set up the Ceph monitors for your cluster. So that you don't have a single point of failure, you will need at least 3 monitors. You must also have an odd number of monitors – 3, 5, 7, etc. Run the below command on each node that should host a monitor.

pveceph createmon
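Once a monitor has been created on each node, you can check that they can see each other with the standard Ceph status command, run from any node:

ceph -s

All three monitors should be listed in the monmap section of the output. Don't worry if the cluster health isn't OK at this point; there are no OSDs or pools yet.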

The rest of the configuration can be completed with the Proxmox web GUI. Log in to your Proxmox web GUI and click on one of your Proxmox nodes on the left hand side, then click the Ceph tab.

[Image: Ceph status tab in the Proxmox web GUI]

Next, you will add a disk to the Ceph cluster. Each disk is created as an OSD in Ceph, which is a storage object used later by the Ceph storage pool. Click on the Disks tab at the bottom of the screen and choose the disk you would like to add to the Ceph cluster. Click the Create: OSD button and click Create to create an OSD. Repeat these steps for each Proxmox server which will be used to host storage for the Ceph cluster.

[Image: Create OSD dialogue]

If the Create: OSD button is greyed out, it’s because the disk is not in a state where Ceph can use it. It’s likely because you have partitions on your disk. Run the fdisk command on the disk and press d to delete the partitions and w to save the changes. For example:

fdisk /dev/vdb
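An interactive fdisk session to wipe the disk might look like the below sketch – press d once per partition, then w to write the changes:

fdisk /dev/vdb
Command (m for help): d
Command (m for help): w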

The last step in creating the Ceph storage cluster is to create a storage pool. Click the Pools tab and click Create. Enter the below values into the new Create Pool dialogue box:

  • Name: name to use for the storage pool.
  • Size: the number of replicas to use for a working cluster; that is, the number of times each piece of data is stored across nodes.
  • Min. Size: the minimum number of replicas which can be used.
  • Crush RuleSet: the CRUSH ruleset for the pool to use; the default value is generally fine.
  • pg_num: this is the placement group count which you have to calculate based on the number of OSDs you have. To calculate your placement group count, multiply the number of OSDs you have by 100 and divide it by the number of times each piece of data is stored. The default is to store each piece of data twice, which means that if a disk fails, you won't lose the data because it's stored twice. For our example: 3 OSDs * 100 = 300, divided by replicas: 300 / 2 = 150.

[Image: Create Pool dialogue]
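If you'd rather create the pool from the command line, pveceph can do that too. A hedged sketch using the values from this example – the pool name is hypothetical and the available options can vary between pveceph versions, so check pveceph help on your install first:

pveceph createpool ceph-pool -size 2 -min_size 1 -pg_num 150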

The Ceph storage pool is now set up and available to use for your KVM images. You can check the status of the Ceph storage pool by clicking on the Status tab.

[Image: Ceph status screen]

See my blog post on mounting Ceph storage on Proxmox.

