On July 8, EMC announced the newest systems in the VMAX family: the 100K, the 200K, and the 400K, known collectively as the VMAX3 Family. They are updated versions of the 10K, 20K, and 40K. In addition to all the software enhancements, the systems feature all-new hardware and packaging. With the new dual-engine system bays, the number of bays lines up directly with the system names: the 100K uses 1 bay, the 200K uses up to 2 bays, and the 400K uses up to 4 bays. This post focuses on the hardware and packaging across these new systems.
The VMAX3 Family systems are packaged with one or two engines per system bay, and all bays are 24” wide (19” internal rails). Each bay also contains the DAEs holding the disk drives for its engines. These new, denser configurations take up much less floor space than previous VMAX generations. For example, the densest drive bay for a 40K holds 400 x 2.5” drives, while a new single-engine system bay holds up to 720 x 2.5” drives plus an engine, and a new dual-engine system bay holds 480 x 2.5” drives plus two engines.
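Here's a quick back-of-the-envelope comparison using only the per-bay figures above (and treating bay footprints as roughly equal, which is an assumption for simplicity):

```python
# Per-bay 2.5" drive counts quoted above. Note the VMAX3 bays also
# house engines, so the density win is understated here.
bays = {
    "VMAX 40K densest drive bay": 400,
    "VMAX3 single-engine bay":    720,
    "VMAX3 dual-engine bay":      480,
}

baseline = bays["VMAX 40K densest drive bay"]
for name, drives in bays.items():
    print(f'{name}: {drives} x 2.5" drives ({drives / baseline:.0%} of the 40K bay)')
```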
Each VMAX3 engine holds 2 directors with a common set of features:
- Dual Intel Ivy Bridge CPUs (4 per engine)
- Up to 16 host ports (32 per engine)
  - Each up to 16Gb FC at launch
- 16 x 6Gb SAS lanes to connect to drives (32 per engine)
- Dual 56Gb InfiniBand Dynamic Virtual Matrix connections (4 per engine)
- Flash drives for vault of write data upon power loss
  - Vault flash is mirrored across engines
- Dual redundant power supplies (4 per engine)
The three models differ in director CPU, cache, and engine count:
- 100K: 6-core CPUs (24 cores per engine), cache up to 512GB per director (1TB per engine), 1-2 engines per system
- 200K: 8-core CPUs (32 cores per engine), cache up to 1TB per director (2TB per engine), 1-4 engines per system
- 400K: 12-core CPUs (48 cores per engine), cache up to 1TB per director (2TB per engine), 1-8 engines per system
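To see how those per-engine figures scale out at the system level, here is a small Python sketch that multiplies them up to per-system maximums; the class and field names are illustrative, not any EMC API:

```python
from dataclasses import dataclass

@dataclass
class Vmax3Model:
    # Illustrative fields built from the per-engine figures above.
    name: str
    cores_per_engine: int
    cache_tb_per_engine: int
    max_engines: int

for m in (Vmax3Model("100K", 24, 1, 2),
          Vmax3Model("200K", 32, 2, 4),
          Vmax3Model("400K", 48, 2, 8)):
    print(f"VMAX3 {m.name}: up to {m.cores_per_engine * m.max_engines} cores "
          f"and {m.cache_tb_per_engine * m.max_engines}TB of cache")
```

A fully configured 400K, for example, works out to 384 cores and 16TB of cache.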
For VMAX3 systems, the CPU cores on each director are no longer tied to specific workloads. Hyper-threading is enabled on each CPU, and the system is configured with pools of cores/threads for each function; each director has pools for internal services, disk services, and host services, for example. If 10 threads are assigned to host FC data services, all 10 share the I/O activity across all of the host ports, and if most of the activity lands on a single port, every thread works to drive those I/Os as quickly as possible. This ensures that VMAX3 systems deliver amazing single-port performance as well as great aggregate performance across all of the host ports. VMAX Host I/O Controls will continue to be supported to give storage administrators the ability to control how much of the total performance may be consumed by any one system/cluster.
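To make the pooled-threads idea concrete, here is a minimal Python sketch assuming a shared queue of I/Os tagged by port; it is illustrative only, not EMC's actual scheduler. Because workers are not pinned to ports, whichever port is busiest naturally gets the most worker attention:

```python
import queue
import threading

io_queue: "queue.Queue[tuple[int, bytes]]" = queue.Queue()

def host_io_worker() -> None:
    # Any worker services any port: no port-to-thread pinning.
    while True:
        port, payload = io_queue.get()
        # ... perform the FC data service for this I/O ...
        io_queue.task_done()

# A pool of 10 threads for "host FC data services", as in the example above.
for _ in range(10):
    threading.Thread(target=host_io_worker, daemon=True).start()

# If most of the traffic arrives on port 3, all 10 threads end up
# driving port 3's I/Os as fast as they can.
for _ in range(1000):
    io_queue.put((3, b"write"))
io_queue.join()
```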
VMAX3 systems have new dense DAE options, supporting 60 or 120 drives per DAE. The DAEs are designed for online replacement of all active components. They are divided into 4 power zones to allow multiple RAID elements to be placed in the same DAE and still ensure that data remains available in the event of a total loss of a power zone. The DAEs slide out online, allowing for single drive (vertical) replacement. The 120 drive DAEs support 2.5” drives, while the 60 drive DAEs support 3.5” drives or 2.5” drives in 3.5” carriers. The 2.5” drive options are 200GB, 400GB, and 800GB flash drives, 300GB 15k drives, and 300GB, 600GB, and 1200GB 10k drives. The 3.5” drive options are 2TB and 4TB 7k drives.
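The power-zone rule can be illustrated with a short sketch: spread the members of each RAID group across the four zones so that a total zone loss never removes more than one member. This is a hypothetical helper, not EMC's actual layout logic:

```python
POWER_ZONES = 4  # each VMAX3 DAE is split into 4 power zones

def place_raid_group(members: list[str]) -> dict[str, int]:
    """Map each RAID member to a distinct power zone."""
    if len(members) > POWER_ZONES:
        raise ValueError("more members than power zones in one DAE")
    return {member: zone for zone, member in enumerate(members)}

# RAID 5 (3+1): every member lands in its own zone, so a total loss of
# any one power zone costs at most one member and the group stays online.
print(place_raid_group(["D0", "D1", "D2", "P"]))
```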
The purpose of all the drive options is to support customer workloads with very different I/O patterns. Some have a relatively low I/O density (a general-usage file share might use 50 I/Os per second per TB), while others have a relatively high I/O density (a VDI server might use 1500 I/Os per second per TB). Not only can VMAX use a mix of drives to support the various workloads, but Fully Automated Storage Tiering (FAST) places the most active data on the drives with the best I/O density, so that customers can combine these drives to get the most cost-effective configurations. Since the introduction of enterprise flash drives for the DMX-4 (2008), flash prices have declined rapidly, making it affordable to use larger percentages of flash in storage designs. With VMAX3, many customers are looking at combinations such as 10% flash and 90% 1.2TB 10k drives for a storage mix that is compact, economical, and able to support the density needs of their demanding applications.
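A rough sizing sketch shows why the mix works. It uses the I/O densities quoted above, but the per-drive IOPS figures are round-number assumptions for illustration, not EMC specifications:

```python
from math import ceil

def required_drives(capacity_tb: float, iops_per_tb: float,
                    drive_tb: float, drive_iops: float) -> int:
    """Drives needed to satisfy both the capacity and the IOPS demand."""
    by_capacity = ceil(capacity_tb / drive_tb)
    by_iops = ceil(capacity_tb * iops_per_tb / drive_iops)
    return max(by_capacity, by_iops)

# 100TB file share at ~50 IOPS/TB on 1.2TB 10k drives (~150 IOPS each,
# assumed): capacity-bound at 84 drives.
print(required_drives(100, 50, 1.2, 150))

# 10TB VDI at ~1500 IOPS/TB on the same 10k drives: IOPS-bound at 100
# drives, most of them nearly empty.
print(required_drives(10, 1500, 1.2, 150))

# The same VDI workload on 800GB flash (~5000 IOPS each, assumed) needs
# only 13 drives.
print(required_drives(10, 1500, 0.8, 5000))
```

This is exactly the gap FAST exploits: by promoting the hottest data to flash, the bulk of the capacity can sit on economical 10k drives.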
All VMAX3 engines are connected via the Dynamic Virtual Matrix, built around dual 56Gb InfiniBand switches. As engines are added to the system, additional system bays are attached, and they can be up to 25m from the first. This new dispersion model gives customers much more flexibility in their installations and ends the need to reserve floor space next to existing cabinets for future expansion. In addition to helping customers with crowded data centers, dispersion allows customers using colocation facilities to place system bays in separate cages, providing host connections from the array directly in each cage. All VMAX3 systems can also be installed in third-party racks, giving customers a lot of flexibility in how they roll out the systems.
The new VMAX3 systems have many ways to deliver additional value to customers. The new hardware and packaging provide improvements in density, scale, and performance that many customers are excited about deploying.