Benchmarking storage results, part 2

The follow-up from part 1 is here for you to digest. Soon ipHouse will have new storage online!

In this report I describe the hardware and guest operating systems in a different format and include the output from bonnie++ and Postmark. I opted to skip the Iometer results because posting them would have meant either a pile of screenshots or parsing everything into graphs.

The contenders: Tintri T540 vs Tegile Zebi HA2200EP vs Nexsan E5000 series, plus an old OpenSolaris build 132 system. Please check the hardware link provided to read up on the specifications of each appliance.

Test system configuration:

physical host (Dell 2900) running vSphere 5.0

32 GiB RAM

1 x 4-core Intel Xeon E5420 @ 2.5 GHz, no hyperthreading

4 x 146 GiB 15K RPM disks in RAID 5 for physical boot and base VM storage

6 x 1 Gbps Ethernet interfaces (1 used for the NFS storage network)

Ubuntu 10.04.4 LTS 64-bit

2 GiB RAM

1 vCPU

32 GiB base system disk, VMDK on local storage, VMware paravirtual SCSI controller

1 E1000 Ethernet adapter for remote SSH connections

FreeBSD 8.2-STABLE 64-bit

2 GiB RAM

1 vCPU

32 GiB base system disk, VMDK on local storage, LSI SAS controller

1 E1000 Ethernet adapter for remote SSH connections

Tests run: software and settings

bonnie++

bonnie++ -u nobody -d <directory> -m bm-(f|l)-(os|tintri|tegile|nexsan) -n 10:102400:1024:1024
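
To make the template concrete, here is how one run expands for the FreeBSD guest against the Tintri mount. The mount point is a hypothetical example, not the exact path I used; the flag meanings come from the bonnie++ man page:

# -u nobody: run as an unprivileged user
# -d: directory on the NFS-backed filesystem under test (hypothetical path)
# -m: label embedded in the results (bm-f-* for FreeBSD, bm-l-* for Linux)
# -n 10:102400:1024:1024: 10*1024 files, 100 KiB max / 1 KiB min size, spread over 1024 directories
bonnie++ -u nobody -d /mnt/tintri/bonnie -m bm-f-tintri -n 10:102400:1024:1024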

Postmark

set size 10000 10000000

set number 2000

set transactions 2500
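
Postmark takes those commands from a configuration file (or typed interactively). A minimal sketch of a complete run, assuming a hypothetical mount point for the appliance under test:

# Hypothetical file and mount paths; the set values are the ones listed above
cat > /tmp/postmark.cfg <<EOF
set location /mnt/tegile/postmark
set size 10000 10000000
set number 2000
set transactions 2500
run
quit
EOF
postmark /tmp/postmark.cfg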

I ran each benchmark multiple times, at least 10 runs per storage appliance, and included the best complete result in the output.
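
For anyone reproducing the runs, the repetition was nothing fancier than a shell loop per guest and appliance; a sketch with a hypothetical mount point and log path:

# Repeat the bonnie++ run 10 times and append everything to one log
for i in 1 2 3 4 5 6 7 8 9 10; do
    bonnie++ -u nobody -d /mnt/tintri/bonnie -m bm-l-tintri \
        -n 10:102400:1024:1024 >> /root/bonnie-tintri.log 2>&1
done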

Bonnie++ output

Ubuntu 10.04.4 LTS bonnie++ statistics

FreeBSD 8.2-STABLE bonnie++ statistics

Flat text files with output from bonnie++ and Postmark

Raw benchmark output

An interesting tidbit: creating 2 TiB filesystems surprised me. FreeBSD took 21-22 seconds while Ubuntu took 5 minutes 40 seconds to do the same, and I could reproduce this on every system I tested against. There was 1-2 seconds of variability per run and per storage appliance, though nothing predictable. Another surprisingly big difference besides execution time: FreeBSD generated a very high spike in IOPS for about 3 seconds at only 50-60 MB/sec of throughput, while Ubuntu's mkfs.ext4 generated high sustained IOPS that saturated the 1 Gbps Ethernet link (over 100 MB/sec) for the duration, presumably because mkfs.ext4 writes out all of its inode tables up front while FreeBSD's newfs lays down far less metadata.
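
The timing itself was nothing more elaborate than wrapping the filesystem create in time; the device names below are hypothetical placeholders for the 2 TiB VMDK presented to each guest:

# Ubuntu guest: ext4 on the test VMDK (placeholder device name)
time mkfs.ext4 /dev/sdb1

# FreeBSD guest: UFS2 with soft updates on the test VMDK (placeholder device name)
time newfs -U /dev/da1p1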

I also found that there wasn't a measurable difference between giving the test VMDK its own virtual SCSI controller and sharing the existing one; I don't know why I expected any difference. Running the benchmarks in parallel on 2-4 VMDKs only added latency once the 1 Gbps pipe was full, which I did expect. Once I aggregated the data, the throughput and per-iteration numbers matched up, with only latency really affected. My takeaway: VMware does a good job with their virtual SCSI adapters.
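
The parallel runs were simply several bonnie++ instances launched at once against separate mount points, one per test VMDK; a minimal sketch with hypothetical mount points and log paths:

# One bonnie++ instance per mounted test VMDK, all running at once
for mnt in /mnt/test1 /mnt/test2 /mnt/test3 /mnt/test4; do
    bonnie++ -u nobody -d "$mnt" -m "bm-l-parallel-${mnt##*/}" \
        -n 10:102400:1024:1024 > "/root/bonnie-${mnt##*/}.log" 2>&1 &
done
wait    # let every instance finish before aggregating results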