Storage benchmarking hardware list

So here I am in my office at ipHouse, finishing up the last benchmarking runs and reviewing the posts I have already written for Tegile and Tintri. I still need to finish one for Nexsan but won’t have time to complete it until later this weekend.

As I get closer to posting the actual results, I wanted to share the storage and testing configurations now instead of including all of this in the results post.

Please use the comments field if you have any questions or comments on the hardware listed in this post.

All testing used NFS, as it is not only the common denominator across these appliances but also what we use in our production network for our VMware vSphere clusters.
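For reference, mounting each appliance’s NFS export as a datastore on the test host looks roughly like this from the ESXi shell; the server name, export path, and datastore name below are placeholders, not the actual values I used:

    # mount the appliance's NFS export as a datastore
    esxcli storage nfs add --host=nfs-appliance.example.com \
        --share=/export/vmware --volume-name=bench-datastore

    # confirm it is mounted
    esxcli storage nfs list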

Vendors:

OpenSolaris build 132, my baseline for testing, sets the bar…low

  • single controller
  • 2 × single-core AMD 240 processors (it is that old…)
  • 8 GiB RAM
  • ZFS
  • 2 × 9-drive RAIDZ2 vdevs with 2 hot spares using 1 TiB 7200 RPM SATA disks (pool layout sketched after this list)
  • Compression and deduplication can each be turned on or off, tunable per share or LUN
  • NFS and iSCSI protocols
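For the curious, a pool laid out like that is created with something along these lines; the device names are hypothetical, not the ones actually in this old box:

    # two 9-disk RAIDZ2 vdevs plus two hot spares
    zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 \
        spare  c2t0d0 c2t1d0

    # compression and dedup are toggled per dataset, e.g. on an NFS share
    zfs create tank/vmware
    zfs set sharenfs=on tank/vmware
    zfs set compression=on tank/vmware
    zfs set dedup=on tank/vmware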

Tintri T540

  • active/standby controllers
  • 2 × 4-core Intel E56xx processors per controller
  • 48 GiB RAM per controller
  • Their own filesystem
  • 8-drive RAID6 with hot spare using 3 TiB 7200 RPM SAS disks
  • 8-SSD RAID6 with hot spare used as primary storage
  • Compression and deduplication are both on, non-tunable
  • NFS protocol

Tegile HA2200EP

  • active/active or active/standby controllers
  • 2 × 4-core Intel E5620 processors per controller
  • 48 GiB RAM per controller
  • ZFS with proprietary technology for storing metadata on SSD
  • 6-drive RAID10 with 2 hot spares using 2 TiB 7200 RPM SAS disks
  • 2 × 200 GiB SAS SSDs used for metadata (186 GiB × 2 in RAID1)
  • 4 × 200 GiB SAS SSDs used as read (176 GiB × 4 in RAID0) and write (10 GiB × 4 in RAID10) caches
  • Compression and deduplication are both on, tunable by share or LUN
  • NFS, iSCSI, F/C, and CIFS/SMB protocols

Nexsan E5110

  • active/standby controllers
  • 1 × 4-core Intel E5506 processor per controller
  • 12 GiB RAM per controller
  • ZFS
  • 12-drive RAID10 with 2 hot spares using 2 TiB 7200 RPM SAS disks
  • 2 × 100 GiB SSDs, one used as a read cache and one as a write cache
  • Compression and deduplication are both off, non-tunable
  • NFS and CIFS/SMB protocols

I completed all tests using a single Dell 2900 series server running VMware vSphere 5.0 (current build) with 32 GiB of RAM, 1 × 4-core E5420 processor, and 4 × 146 GiB 15K RPM SAS disks in RAID5 for booting and standalone VM storage. I used a single 1 Gbps port connected to a backend network that housed all of the above storage appliances; each appliance was likewise connected to the switch via a single 1 Gbps port.

I tested using 2 different VMs running FreeBSD 8 and Ubuntu 10.04.

I built the virtual systems with 2 GiB RAM, 2 vCPUs, and a 32 GiB boot disk stored on the local VMFS5 volume (above). I would then edit each VM and add another VMDK stored on the storage appliance under test. Each benchmark ran multiple times, and the best results were kept for comparison.
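For anyone who prefers the command line, creating and attaching that extra VMDK from the ESXi shell looks roughly like the sketch below; the datastore name, VM folder, disk size, and VM ID are placeholders, not the values I actually used:

    # create an additional virtual disk on the NFS datastore under test
    vmkfstools -c 20G /vmfs/volumes/bench-datastore/testvm/bench-disk.vmdk

    # attach it to the VM as an existing disk
    # (find the VM ID with: vim-cmd vmsvc/getallvms)
    vim-cmd vmsvc/device.diskaddexisting 42 \
        /vmfs/volumes/bench-datastore/testvm/bench-disk.vmdk 0 1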

I spent many hours writing my own benchmarks but eventually tossed them all out in favor of output that other people can understand and relate to. I chose bonnie++ and postmark for this; each exercises different aspects of a ‘server’ and ‘filesystem’. I have separated the results into sections based on the VM operating system, as I don’t want FreeBSD to be compared directly against a Linux-based system.
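For those who want to reproduce this, the two tools are driven roughly like so; the mount point, sizes, and counts here are illustrative, not the exact parameters from my runs:

    # bonnie++: sequential/random throughput plus file create/delete tests
    # -d test directory, -s file size in MiB, -n number of files (x1024), -u run-as user
    bonnie++ -d /mnt/bench -s 4096 -n 128 -u nobody

    # postmark: small-file transaction workload, driven by a simple command file
    printf 'set location /mnt/bench\nset number 10000\nset transactions 50000\nrun\nquit\n' > pm.conf
    postmark pm.conf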

That’s a lot of hardware to test!
