Customers prefer to spend as little as they need to on infrastructure. One of the tricks is understanding what you really need and where you can cut. With the price of disk capacity dropping each year, customers would love to buy the largest, cheapest drives they can to support their workload. This results in near-term savings when the gear is purchased, and longer-term savings from reduced power, cooling, and maintenance bills. But how large can the drives be before they cannot sustain the business workload?
The Problem
Storage tiering is not a new idea. Back in 2003, EMC started a big campaign to help customers save money by tiering their data. It was called ILM - Information Lifecycle Management. The idea was to help customers put the right data on the right storage at the right time, saving them money. There were only two problems: the tiering placements were manual (like this process example), and to place data correctly you needed to understand both its current profile and how that profile would change in the future.
Many customers decided that the manpower was more expensive than just buying additional hardware 'to be safe.' Sure, you could buy a mix of drive types, but buying all the same drives and spreading the workload across all of them is much easier to plan for. And if you did it right, the workload would push the drives hard (to about 70% busy) at about the same time you 'ran out' of space (at 75-90% full). Of course, if the workload turns out to have very different needs, you either waste space (the drives get too busy before they fill up) or waste money (you bought more performance than you needed).
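To see how that sizing math plays out, here is a minimal sketch with made-up numbers (the dataset size, workload IOPS, drive capacity, and per-drive IOPS are all assumptions for illustration, not figures from this article). It compares how many drives a workload needs to satisfy capacity versus performance, using the same 70%-busy and 75-90%-full planning thresholds mentioned above.

```python
import math

def drives_needed(dataset_tb, workload_iops,
                  drive_tb, drive_iops,
                  max_full=0.80, max_busy=0.70):
    """Return (drives needed for capacity, drives needed for performance).

    max_full -- don't plan to fill drives beyond this fraction (75-90% range)
    max_busy -- don't plan to run drives beyond this utilization (~70% busy)
    """
    for_capacity = math.ceil(dataset_tb / (drive_tb * max_full))
    for_performance = math.ceil(workload_iops / (drive_iops * max_busy))
    return for_capacity, for_performance

# Hypothetical example: 200 TB of data generating 15,000 IOPS,
# sized on large 8 TB drives rated at roughly 150 IOPS each.
cap, perf = drives_needed(dataset_tb=200, workload_iops=15_000,
                          drive_tb=8, drive_iops=150)
print(f"capacity needs {cap} drives, performance needs {perf} drives")
# If performance needs far more drives than capacity does, the big/cheap
# drives are too slow for the workload: you buy spindles for IOPS and
# "waste" their extra space. If capacity dominates, the reverse is true.
```

With these assumed numbers, capacity calls for about 32 drives but performance calls for over 140, so the workload is performance-bound and the cheap large drives would sit mostly empty. The point of sizing "right" is to pick a drive type where the two numbers come out close to each other.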