Review: VMware Virtual SAN turns storage inside-out

VMware's VSAN 1.0 combines easy setup and management with high availability and high performance -- and freedom from traditional storage systems

Convergence of compute and storage is all the rage in the virtualization market these days. You see it in Microsoft's Windows Server 2012 R2 with Hyper-V and Storage Spaces. You see it in third-party platforms such as Nutanix. And you see it in VMware's vSphere flagship with the addition of Virtual SAN, a new capability built into the ESXi hypervisor that turns the direct-attached storage in vSphere cluster nodes into highly available, high-performance shared storage.

The goals behind Virtual SAN, or VSAN, are both to lower overall storage costs and to eliminate the I/O latencies associated with networked storage. VSAN achieves high availability by replicating storage objects (virtual machine disks, snapshot images, VM swap disks) across the cluster, allowing admins to specify the number of failures (nodes, drives, or network) to be tolerated on a per-VM basis. It addresses latency by leveraging flash-based storage devices for write buffering and read caching, along with support for 10GbE network connectivity.
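
To make the failures-to-tolerate idea concrete, here is a rough back-of-the-envelope sketch in Python. It reflects VSAN 1.0's mirroring behavior as I understand it -- tolerating N failures means keeping N+1 copies of each object plus witness components, which in turn requires at least 2N+1 hosts -- and the function name is mine, not VMware's.

# Rough math for VSAN's per-VM "failures to tolerate" (FTT) policy.
# Assumptions (mine, not VMware's official sizing formulas): each object is
# mirrored FTT+1 times, one witness per tolerated failure breaks ties, and
# placement needs at least 2*FTT + 1 hosts.

def vsan_ftt_footprint(vmdk_gb: float, failures_to_tolerate: int) -> dict:
    replicas = failures_to_tolerate + 1       # full copies of the object
    witnesses = failures_to_tolerate          # tie-breaking metadata components
    min_hosts = 2 * failures_to_tolerate + 1  # hosts needed to spread them out
    return {
        "replicas": replicas,
        "witnesses": witnesses,
        "min_hosts": min_hosts,
        "raw_capacity_gb": vmdk_gb * replicas,  # raw space consumed cluster-wide
    }

# A 40GB virtual disk set to tolerate one failure consumes 80GB of raw
# capacity and needs at least three hosts -- which is exactly why three
# nodes is the floor for a VSAN cluster.
print(vsan_ftt_footprint(40, failures_to_tolerate=1))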

VSAN requires a minimum of three vSphere nodes to form a clustered data store, and each node in the cluster must have both SSD and HDD storage in order to join. Turning VSAN on is a matter of enabling a single check box on the settings page for the vSphere cluster, then selecting either Automatic or Manual mode for adding disks to the VSAN storage pool. It's that simple.
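
The same one-check-box operation can also be scripted. The snippet below is a rough pyVmomi (vSphere Python SDK) sketch of enabling VSAN on an existing cluster with automatic disk claiming; the vCenter address, credentials, and cluster name are placeholders, and exact property names can vary between SDK releases, so treat it as an outline rather than a recipe.

# Sketch: enable Virtual SAN on a vSphere cluster with automatic disk
# claiming, roughly equivalent to ticking the check box described above.
# Hostname, credentials, and cluster name are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="********")
try:
    content = si.RetrieveContent()

    # Walk the inventory for the target cluster (error handling omitted).
    cluster = None
    for dc in content.rootFolder.childEntity:
        for child in dc.hostFolder.childEntity:
            if isinstance(child, vim.ClusterComputeResource) and child.name == "VSAN-Cluster":
                cluster = child

    # Build a reconfigure spec that turns VSAN on and lets it claim
    # eligible SSDs and HDDs automatically ("Automatic" mode).
    vsan_config = vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(autoClaimStorage=True))
    spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_config)
    cluster.ReconfigureComputeResource_Task(spec, modify=True)
finally:
    Disconnect(si)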

VSAN, at least in its initial release, targets a short list of use cases. Not surprisingly, VDI (virtual desktop infrastructure) is the showcase scenario, with VMware's Horizon View the first product to take advantage of the new capability. VMware even includes VSAN in the Advanced and Enterprise SKUs of Horizon View 6. Starting with version 5.3.1, Horizon View is specifically designed for use with Virtual SAN data stores, meaning you'll need the latest ESXi 5.5 Update 1 to run the two together.

For this review, I was provided with hardware from Supermicro and Lenovo. The Supermicro system is a SuperServer SYS-F627R3-R72B+ with four independent nodes in a single 4U chassis. Each node has two Intel Xeon E5-2420 CPUs, 256GB of memory, five 2TB Seagate 10K RPM SAS HDDs, and one 400GB Intel S3700 Series SATA SSD, along with two 10GbE and two 1GbE network interfaces. In addition to the SuperServer, Supermicro provided one of its SSE-X3348T 48-port 10GBase-T switches to connect the four nodes. Lenovo provided three ThinkServer RD340 1U servers, each with one Intel Xeon E5-2420 CPU, 64GB of memory, one 1TB Toshiba 7,200 RPM SAS HDD, one 100GB STEC M161SD2-100UCM SATA SSD, and three 1GbE network interfaces.

Note that the single HDD per node in the Lenovo cluster, while supported by VMware, is not recommended. For even a low-end VSAN node (one supporting up to 15 VMs and 2,000 IOPS), VMware recommends at least five 1TB NL-SAS HDDs. You will likely want more RAM, a larger SSD, and more network bandwidth (that is, 10GbE) than my Lenovo nodes have as well.
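
As a crude illustration of what those per-node figures imply for cluster sizing, here's a short Python sketch. The arithmetic and the helper name are mine, drawn only from the 15-VM and 2,000-IOPS numbers quoted above; it ignores capacity, cache ratio, and failures-to-tolerate overhead, so it produces a floor, not a design.

# Estimate how many low-end VSAN nodes a workload needs, using VMware's
# quoted per-node figures of 15 VMs and 2,000 IOPS. My arithmetic, not
# VMware's sizing tool; capacity and cache sizing are ignored.
import math

def nodes_for_workload(vm_count: int, total_iops: int,
                       vms_per_node: int = 15, iops_per_node: int = 2000) -> int:
    by_vms = math.ceil(vm_count / vms_per_node)
    by_iops = math.ceil(total_iops / iops_per_node)
    return max(3, by_vms, by_iops)  # never fewer than the three-node VSAN minimum

# Example: 100 desktop VMs averaging 20 IOPS each works out to 7 low-end nodes.
print(nodes_for_workload(vm_count=100, total_iops=100 * 20))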

InfoWorld Scorecard: VMware Virtual SAN 1.0

  Management (20%)    9
  Performance (20%)   9
  Availability (20%)  10
  Scalability (20%)   10
  Setup (10%)         8
  Value (10%)         8
  Overall Score       9.2