Use DD to Quickly Benchmark Your CPU

Category : How-to


Let me start by saying… this is a quick and dirty method and shouldn’t be used for precise comparisons of CPU performance. That said, it’s perfectly adequate for approximating CPU performance, especially on cloud-hosted VPSs, to ensure you’re getting the horsepower you’re being promised.

The idea here is to give your machine a computationally expensive task so that the CPU runs at 100 percent and becomes the bottleneck (rather than disk I/O, etc.). The task is then timed: shorter times generally indicate a faster CPU, longer times a slower one.

CPU benchmark

The md5sum command creates an MD5 hash of some data. We can generate data on the fly with dd and pipe it into md5sum to create a computationally expensive task. We’ll limit the amount of data to hash and time how long it takes to produce the hash.

Run the command below to start the test. If it completes in under 2 seconds, increase the count=1k value to something higher, for example count=10k.
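
A minimal sketch of such a test, assuming a 1MB block size (bs=1M with count=1k pipes roughly 1GB of zeroes into md5sum; dd prints its own timing summary to the terminal when it finishes):

dd if=/dev/zero bs=1M count=1k | md5sum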

When the test completes, dd prints a short summary of the work it did.

There are a couple of items that you’re interested in here, and the rest can be ignored.

  • 2.38909 s is the time it took in seconds for the operation to complete. This is the number to use for comparison with other machines – lower is better.
  • 449 MB/s is the speed at which the data was generated and pushed into the md5sum tool to be hashed – higher is better.

 

CPU details with cpuinfo

Linux has various nuggets of information about your system available in the /proc directory of the root filesystem. You can cat various files, such as /proc/cpuinfo, to see system specifications and metrics.
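
For example (the grep line is just a convenient way to pull out the model and clock speed):

cat /proc/cpuinfo
grep "model name" /proc/cpuinfo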

The output lists each CPU core’s specifications; in this case it showed a Xeon CPU running at a clock speed of 2.50GHz.

 



qcow2 Physical Size With Different preallocation Settings


The qcow2 image format is the de facto image format for KVM/QEMU virtual machines. The format provides various parameters that can be configured when creating the image, each with its own benefits and drawbacks.

The below section describes the preallocation attribute and how it can affect the size and performance of a virtual machine.

Please see this blog post for more information on preallocation, and then continue on to the results!

The below tests are all performed on the same hardware and on a single hard disk that’s on its own dedicated bus with no other traffic. The disk itself is a mechanical Western Digital Green 2TB. I’ve used this rather than an SSD so that the results are more dramatic and the impact of IO performance is easier to see. The tests all use the same virtual disk image size of 4GB, encryption is disabled, cluster_size is the default 65536 and lazy_refcounts are off unless otherwise specified.

Virtual Disk Creation Time

The first example shows how long it takes to create each virtual disk image and how much physical disk space is used/reserved for the image.
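
Each image was created with qemu-img and timed; a sketch of the kind of command used, assuming a 4GB image called test.qcow2 (swap the preallocation value for each run), is:

time qemu-img create -f qcow2 -o preallocation=off test.qcow2 4G
du -h test.qcow2    # physical size actually allocated on disk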

preallocation setting | Time to create | Physical size on disk
off                   | 0.312s         | 196K
metadata              | 0.507s         | 844K
falloc                | 0.015s         | 4.0G
full                  | 39.402s        | 4.0G

As you can see, it takes a huge amount of time to use the full preallocation setting because the filesystem it’s being written to has to allocate the full size of the file and write empty data to it (in our case around 4GB). The least time is taken by falloc, because qemu-img uses the underlying filesystem’s fallocate() call to reserve the disk space without having to write data to consume the full size.

You can download the bash script used for the above test: Disk Test preallocation Disk Size Script.

Virtual Disk Performance

The next thing to consider is the performance of each virtual disk type. For this test each virtual disk is mounted and written to using dd. The performance hit here is when the virtual disk has to expand and allocate physical disk space for new data clusters and new metadata, with metadata creation being by far the biggest overhead.
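
The exact steps live in the script linked below; a rough sketch of the approach, with an assumed nbd device, mount point and write size, is:

# expose the qcow2 image as a block device, then format and mount it
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 test.qcow2
mkfs.ext4 /dev/nbd0
mkdir -p /mnt/qcow2-test
mount /dev/nbd0 /mnt/qcow2-test

# write to the mounted image and time it
time sh -c "dd if=/dev/zero of=/mnt/qcow2-test/testfile bs=4k count=25000 && sync"

# detach when finished
umount /mnt/qcow2-test
qemu-nbd --disconnect /dev/nbd0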

preallocation setting | Time to write | Write speed
off                   | 184.23s       | 729 kB/s
metadata              | 85.87s        | 1.6 MB/s
falloc                | 100.77s       | 1.3 MB/s
full                  | 84.31s        | 1.6 MB/s

You can immediately see that virtual disks with no preallocation take by far the longest to write to, and virtual disks with full preallocation are the quickest. Interestingly, a preallocation value of metadata is a very close second to full, which indicates that much of the performance hit is down to assigning and managing metadata.

You can download the bash script used for the above test: Disk Test preallocation Write Performance.

 

 



Benchmark MySQL server Performance with Sysbench


You can spend hours tweaking the settings of a MySQL server instance to get the best possible performance for your hardware and environment. The hardest part is making sure that each change actually results in increased performance.

To ensure each change results in better performance of the MySQL server we need to measure the performance of the MySQL server before and after the change.

There are a variety of tools to automate MySQL benchmarking, one of which is Sysbench. I will be demonstrating the tests on a Debian 7 system; however, Sysbench will work on most common Linux distributions. Sysbench can be used to test both InnoDB and MyISAM table types, in either a single-server environment or a clustered environment with a single instance.

Installing Sysbench differs between Linux distributions; it can be downloaded and built from source from SourceForge, or installed with apt-get on Ubuntu or Debian.
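
On Debian or Ubuntu:

apt-get install sysbench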

Log in to MySQL using the CLI or your favorite GUI tool and create a new database which will be used for the test. If you already have a database you can use for the test then you can skip this step. This example will use a database called dbtest for the tests.
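
From the MySQL CLI:

CREATE DATABASE dbtest;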

The next step is to use the prepare statement with sysbench to generate a table in the specified database which will be used when performing tests.

From the command line, run the below command changing [USER] and [PASSWORD] to your MySQL access credentials.
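
Sysbench on Debian 7 uses the older 0.4/0.5 syntax, so a representative prepare command looks like the following sketch (the exact options used originally may differ):

sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=dbtest --mysql-user=[USER] --mysql-password=[PASSWORD] prepare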

This has created a table called sbtest with 1000000 rows of data which will be used for testing. The below commands show the created table and do not need to be executed.
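
If you want to check the table (purely illustrative, not part of the benchmark):

SHOW TABLES FROM dbtest;
SELECT COUNT(*) FROM dbtest.sbtest;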

The next step is to begin the performance tests. There are multiple parameters which can be changed to alter the test performed, but we will do a simple read/write test. Again you will need to change [USER] and [PASSWORD] to your MySQL access credentials.
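
A read/write run in the same legacy syntax might look like this; the duration, request limit and thread count here are illustrative values rather than the exact ones used in the original test:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=dbtest --mysql-user=[USER] --mysql-password=[PASSWORD] --max-time=60 --max-requests=0 --num-threads=8 --oltp-read-only=off run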

To perform a read only test, change the above parameter oltp-read-only=off to oltp-read-only=on.

In the results, the main statistic to look for is transactions, which shows the number of transactions the test managed to complete and how many were completed per second.

Finally, you need to clean up your test area. If you can drop the entire database which was used for testing, then log in to MySQL and run the below command.
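
From inside the MySQL CLI:

DROP DATABASE dbtest;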

If you are unable to drop the whole database then Sysbench comes with a cleanup command. Again you will need to change [USER] and [PASSWORD] to your MySQL access credentials.
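
Using the same database and credentials as before:

sysbench --test=oltp --mysql-db=dbtest --mysql-user=[USER] --mysql-password=[PASSWORD] cleanup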

 



Simple Bonnie++ Example

Category : How-to


Bonnie++ is a disk and file system benchmarking tool for measuring I/O performance. With Bonnie++ you can quickly and easily produce a meaningful value to represent your current file system performance.

Before using Bonnie++ make sure that you have it installed on your system. In Ubuntu, use apt-get to install the bonnie++ package.
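
On Ubuntu (or Debian):

apt-get install bonnie++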

Run the bonnie++ command with the following attributes:
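
A sketch of the invocation, with each placeholder mapped to the corresponding bonnie++ flag:

bonnie++ -d [TEST_LOCATION] -s [TEST_SIZE] -m [TEST_NAME] -u [TEST_USER]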

  • [TEST_LOCATION] – the directory where bonnie++ will create its test files and run the benchmark operations.
  • [TEST_SIZE] – the size of the test file – this should be greater than double the RAM in your system.
  • [TEST_NAME] – this is simply a label which will be written out with the results.
  • [TEST_USER] – the user who should perform the test. This is not required if you are not running as root.

For example:
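
Assuming a machine with 2GB of RAM, /tmp as the test location and the test being run as root (all illustrative values):

bonnie++ -d /tmp -s 5000 -m my_first_test -u root    # 5000MB is comfortably more than double the 2GB of RAM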

The easiest way to understand the results of a bonnie++ test is to run the output through the bon_csv2html utility. This Perl script takes the bonnie++ results and generates an HTML page which you can later open with your web browser.

Copy the last line of the bonnie++ output into the echo command to replace [RESULTS] and alter the [OUTPUT] path to point to where you would like to save your results.

Example command:
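
A sketch, with [RESULTS] standing in for the CSV line copied from your own bonnie++ run and an assumed output path:

echo "[RESULTS]" | bon_csv2html > /tmp/bonnie_results.html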

Finally open the output file with your web browser.


See my other post on using bonnie++ to benchmark your file system.



Benchmark disk IO with DD and Bonnie++


Benchmarking disk or file system IO performance can be tricky at best. The problem is that modern file systems leverage various techniques to achieve the best possible performance, such as caching files in RAM. This means that unless you circumvent the disk cache, your reported speeds will reflect how quickly files can be read from memory rather than from disk.

In this example, I’ll cover benchmarking a Linux file system using two methods: dd for the easy route, and bonnie++ for a more comprehensive test.

dd

Write

You can use dd to create a large file as quickly as possible to see how long it takes. It’s a very basic test and not very customisable; however, it will give you a sense of the performance of the file system. You must make sure this file is larger than the amount of RAM you have on your system to avoid the whole file being cached in memory.

dd is usually installed out of the box on most Linux distributions, which makes it an ideal tool for locked-down environments or environments where it’s tricky to get packages installed. Use the command below, substituting [PATH] with the filesystem path to test, [BLOCK_SIZE] with the block size and [LOOPS] with the number of blocks to write.
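
Based on the breakdown below, the command takes this shape; wrapping dd and sync in a single timed shell means the flush to disk is included in the measurement:

time sh -c "dd if=/dev/zero of=[PATH] bs=[BLOCK_SIZE] count=[LOOPS] && sync"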

A breakdown of the command is as follows:

  • time – times the overall process from start to finish
  • of= – this is the path you would like to test. The path must be read/writable.
  • bs= is the block size to use. If you have a specific load which you are testing for, make this value mirror the write size which you would expect.
  • sync – forces the process to write the entire file to disk before completing. Note that dd will return before the data is fully flushed, but the time command will not, so the time output includes the sync to disk.

The below example uses a 4K block size and loops 2000000 times. The resulting write size will be around 7.6GB.
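
For example, with an assumed mount point of /mnt/mount1:

time sh -c "dd if=/dev/zero of=/mnt/mount1/testfile bs=4k count=2000000 && sync"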

Now, let’s do the math. dd tells us how many bytes were written, and the time command tells us how long it took – use the real value at the bottom of the time output. Use the formula BYTES / SECONDS. For these larger tests, convert bytes to KB or MB to make more sensible numbers.

(8192000000 / 1024 / 1024) / ((2 * 60) + 41.618)

Bytes converted to MB / (2 minutes + 41.618 seconds)

This gives us an average of 48.34 megabytes per second over the duration of the test.

Read

We can also use dd to test the read speed of a disk by reading the file we created and timing the process. Before we do that, we need to flush the file cache by writing another file which is about the size of the RAM installed on the test system. If we don’t do this, the file we just created will be partially in RAM and the read test would therefore not be reading entirely from disk.

Create a file using dd which is about the same size as the RAM installed on the system. The below assumes 2GB of RAM is installed. You can check how much RAM is installed with free.
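
A sketch, again with the assumed /mnt/mount1 mount point; 500000 blocks of 4K is roughly 2GB:

dd if=/dev/zero of=/mnt/mount1/flushfile bs=4k count=500000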

Now for the read test of our original file.
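
Reading the test file back and discarding the data:

time sh -c "dd if=/mnt/mount1/testfile of=/dev/null bs=4k"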

And process the time result the same way as when writing.

Bonnie++

Bonnie++ is a small utility with the purpose of benchmarking file system IO performance. It’s commonly available in Linux repositories or available from source from the home page.

On Debian/Ubuntu based systems, use apt-get to install the bonnie++ package (apt-get install bonnie++).

Just like with dd, we need to minimise the effect of file caching, so the tests should be performed on datasets larger than the amount of RAM in the test system. Some people suggest using datasets up to 20 times the amount of RAM; others suggest twice the amount of RAM. Whichever you use, always use the same dataset size for all tests to ensure the results are comparable.

There are many arguments that can be used with bonnie++, too many to cover here, so let’s look at some of the common ones.

  • -d – is used to specify the file system directory to use to benchmark.
  • -u – is used to run as a particular user. This is best used if you run the program as root. This can be the UID or the user name.
  • -g – is used to run as a particular group. This is the GID or the name.
  • -r – is used to specify the amount of RAM in MB the system has installed. This is total RAM, and not free RAM. Use free -m to find out how much RAM is on your system.
  • -b – removes write buffering and performs a sync at the end of each bonnie++ operation.
  • -s – specifies the dataset size to use for the IO test in MB.
  • -n – is the number of files to use for the create files test.
  • -m – this adds a label to the output so that you can understand what the test was at a later date.
  • -x – is used to repeat the tests n times. Change n to the number of how many times to run the tests.

bonnie++ performs multiple tests, depending on the arguments used, and does not display much until the tests are complete. When the tests complete, two outputs are visible. The bottom line is not readable (unless you really know what you are doing); however, above that is a table-based output of the results of the tests performed.

Let’s start with a basic test, telling bonnie++ where to test and how much RAM is installed, 2GB in this example. bonnie++ will then use a dataset twice the size of the RAM for tests. As I am running as root, I am specifying a user name.
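
A sketch of such a run; the directory and user are placeholders and -r takes the installed RAM in MB:

bonnie++ -d [TEST_LOCATION] -r 2048 -u [USER]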

bonnie++ will take a few minutes, depending on the speed of your disks and return with something similar to the output below.

The output shows quite a few statistics, but it’s actually quite straightforward once you understand the format. First, discard the bottom line (or three lines in the above output) as this is the results separated by commas. Some scripts and graphing applications understand these results but they’re not so easy for humans. The top few lines are just the tests which bonnie++ performs and, again, can be discarded.

Of course, all of the output of bonnie++ is useful in some context; however, we are just going to concentrate on random read/write, reading a block and writing a block. This boils down to this section:

The above output is not the easiest to understand due to the character spacing, but you should be able to follow it, just. The below points are what we are interested in for this example, and should give you a basic understanding of what to look for and why.

  • ubuntu is the machine name. If you specified -m some_test_info this would change to some_test_info.
  • 4GB is the total size of the dataset. As we didn’t specify -s, a default of RAM x 2 is used.
  • 17094 shows the speed in KB/s at which the dataset was written. This and the next two figures are sequential block operations – that is, reading or writing more than one data block at a time.
  • 15431 is the speed at which a file is read and then written and flushed to the disk.
  • 37881 is the speed the dataset is read.
  • 548.4 shows the number of blocks which bonnie++ can seek to per second.
  • Latency numbers correspond to the above operations – this is the full round-trip time it takes for bonnie++ to perform each operation.

Anything showing +++ means the test completed too quickly for the result to be reported with reasonable confidence. Increase -n to use more files in the operation and run the test again.

bonnie++ can do much more and, even out of the box, shows much more, but this will give you some basic figures to understand and compare. Remember: always perform tests on datasets larger than the RAM you have installed, multiple times over the day, to reduce the chance of other processes interfering with the results.