The Dirty Secret about SSDs

July 17, 2019

I recently ran across a staggering statistic from McAfee: the average enterprise environment runs 1,427 unique cloud services. It stands to reason that each of those services places different demands on compute and storage, and that workloads vary, sometimes dramatically, from one application to another. How does a data center administrator or storage architect manage all of those workloads today, let alone plan for the dozens of new applications that will be added in the future?

The people we talk to take varied approaches to managing the complexity of their cloud environments, but too often it comes down to a “big hammer” approach: throwing more hardware at the problem to ensure performance and meet SLAs. When it comes to flash storage in general, and SSDs in particular, that has been the only option. Write performance slow? Add more flash (overprovisioning). Need longer drive life? Add more flash. Inconsistent latency? You get the picture.

Despite their shortcomings, SSDs are taking over the data center and storage in general, as well they should. The performance gains realized by replacing HDDs with SATA or SAS SSDs are well documented, but applications hungry for more performance quickly caught up. Enter NVMe, which rides the PCIe bus and greatly reduces latency, to save the day. Again we are promised tremendous performance gains, and again applications will catch up sooner than any of us expect.

All of this means that the time, effort, and expense put into qualifying a storage solution today will likely need to be repeated. As new and ever more complex applications pile on load, the solution starts to under-perform or become problematic in some other way, well before the next hardware refresh cycle. Given the substantial investment already made in storage, that usually means you are stuck with those performance issues for a while, or you reach for the big hammer again. This reminds me of another statistic, from a recent 451 Research poll: 45% of people who have transitioned to flash storage are unhappy with their solution. That is a remarkable number considering the pace at which flash is being adopted.

The dirty secret is that commercially available SSDs lack any ability to adapt to changing workloads and offer no tuning capabilities beyond variable overprovisioning. Many users find out too late that their chosen SSDs are a poor fit for their actual workloads. In some cases, qualification was done with synthetic benchmarks that do not resemble the real workload; in others, applications change and the workload profile drifts from what was qualified. Whatever the reason for their dissatisfaction, too many users end up stuck with flash storage that does not meet expectations and go to great lengths to compensate, in many cases spending millions of dollars developing middleware or simply buying more hardware to make up for the performance shortfall.
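To make that benchmark gap concrete, here is a minimal sketch, in Python, of the kind of mismatch we mean: the first fio command it prints is a typical synthetic qualification run (steady 4K random writes at a single fixed queue depth), while the second approximates a messier production profile with mixed reads, writes, block sizes, and competing streams. The device path, block-size split, and queue depths are illustrative assumptions only, not a prescription for how to qualify a drive.

```python
# Illustrative only: two hypothetical fio job definitions showing how a
# "hero number" qualification run differs from a rougher production-like mix.
# Device path, block-size split, and queue depths are made-up examples.

SYNTHETIC = {
    "name": "qual-4k-randwrite",
    "filename": "/dev/nvme0n1",      # example device; a destructive raw-device test
    "ioengine": "libaio",
    "direct": 1,
    "rw": "randwrite",               # pure 4K random writes...
    "bs": "4k",
    "iodepth": 32,                   # ...at one steady queue depth
    "numjobs": 1,
    "time_based": 1,
    "runtime": 300,
}

REALISTIC = {
    "name": "approx-production-mix",
    "filename": "/dev/nvme0n1",
    "ioengine": "libaio",
    "direct": 1,
    "rw": "randrw",                  # mixed reads and writes
    "rwmixread": 70,                 # roughly 70% reads, 30% writes
    "bssplit": "4k/60:16k/30:128k/10",  # mixed block sizes
    "iodepth": 8,
    "numjobs": 4,                    # several competing I/O streams
    "time_based": 1,
    "runtime": 300,
}

def fio_command(job: dict) -> str:
    """Render a job description as an fio command line."""
    return "fio " + " ".join(f"--{key}={value}" for key, value in job.items())

if __name__ == "__main__":
    print(fio_command(SYNTHETIC))
    print(fio_command(REALISTIC))
```

Even on the same drive, those two profiles will typically produce very different latency and endurance behavior, which is exactly why a qualification built on the first can fall apart under the second.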

There is a solution. With TrueFlash software-defined flash, we analyze customers’ real workloads at the flash layer, tune the software to perfectly match the application, and deliver a flash storage solution that provides better performance, longer drive life, and reduced TCO. Better still, as applications evolve and workloads change, re-tuning the TrueFlash software is as simple as a firmware update with no disruption to the data center.

If you would like to gain more insight into how your application workloads are impacting your storage performance, or would like to learn more about the next wave of flash storage, reach out. We would be happy to discuss how we can help.

