Considerations for SSD Deployments

SSD is a great technology, but your best value from it will come when you deploy it most efficiently

Legacy storage architectures do not perform very efficiently in virtual computing environments. The very random, very write-intensive I/O patterns generated by virtual hosts drive storage costs up as enterprises either add spindles or look to newer storage technologies like solid state disk (SSD) to address the IOPS shortfall.

SSD costs are coming down, but they are still significantly higher than spinning disk costs. When enterprises do consider SSD, how it is used and where it is placed in the virtual infrastructure can make a big difference in how much they have to spend to meet their performance requirements. Placement can also impose operational limitations that may or may not be issues in specific environments.

Some of the key considerations are SSD placement (in the host or in the SAN), high availability/failover requirements, caching vs logging architectures, and the value of preserving existing storage investments versus ripping and replacing them with hardware designed specifically for virtual environments.

SSD Placement
There are two basic locations to place SSD, each of which offers its own pros and cons. Host-based SSD will generally offer the lowest storage latencies, particularly if the SSD is located on PCIe cards. In non-clustered environments where it is clear that IOPS and storage latencies are the key performance problems, these types of devices can be very valuable. In most cases, they will remove storage as the performance problem.

But don't necessarily expect that these devices will deliver their rated IOPS directly to your applications in your environment. Once storage is removed as the bottleneck, system performance is determined by whatever the next bottleneck in the system is: CPU, memory, the operating system, or any number of other potential issues. This effect is described by Amdahl's Law.
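
As a rough sketch of why rated device IOPS rarely become application IOPS, consider Amdahl's Law applied to the fraction of total request time spent in storage. The 60% storage fraction and 10x device speedup below are hypothetical numbers chosen purely for illustration:

    # Amdahl's Law: overall speedup when only part of the workload
    # benefits from an improvement. Hypothetical numbers, not benchmarks.
    def amdahl_speedup(fraction_improved, component_speedup):
        return 1.0 / ((1.0 - fraction_improved) +
                      fraction_improved / component_speedup)

    # If 60% of application time is spent waiting on storage and SSD
    # makes storage 10x faster, the overall gain is only ~2.2x: the
    # remaining 40% (CPU, memory, OS) now dominates.
    print(round(amdahl_speedup(0.6, 10), 2))   # 2.17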

What you probably care about are application IOPS. Test the devices you're considering in your environment before purchase, so you know exactly what level of performance gain they will provide. Then you can make a more informed decision about whether or not you can cost-justify them for your workloads. Paying for performance you can't use is like buying a Ferrari for use on America's interstate system - you may never get out of second gear.

Raw SSD technology generally can provide blazingly fast read performance. Write performance, however, varies depending on whether you are writing randomly or sequentially. The raw technical specs on many SSD devices indicate that sequential write performance may be half that of read performance, and random write performance may be half again as slow. Write latencies may also not be deterministic because of how SSD devices manage the space they are writing to. Many SSD vendors are combining software and other infrastructure around their SSD devices to address some of these issues. If you're looking at SSD, look to the software it's packaged with to make sure the SSD capacity you're buying can be used most efficiently.
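
To see how those spec-sheet ratios might play out, here is a back-of-the-envelope calculation using the ratios just described (sequential writes at roughly half the read rate, random writes at half that again); the 80,000 IOPS device and the 30/70 workload mix are invented for illustration:

    # Hypothetical effective throughput for a mixed workload on an SSD
    # whose writes are slower than its reads. All numbers are invented.
    read_iops       = 80_000
    seq_write_iops  = read_iops / 2   # sequential writes: ~half of reads
    rand_write_iops = read_iops / 4   # random writes: half again as slow

    # A 30% read / 70% random-write mix; the harmonic mean gives the
    # correct blended rate for operations running at different speeds.
    read_pct, write_pct = 0.30, 0.70
    effective_iops = 1.0 / (read_pct / read_iops +
                            write_pct / rand_write_iops)
    print(f"{effective_iops:,.0f} IOPS")   # ~25,806, far below 80,000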

Host-based SSD introduces failover limitations. If you have implemented a product like VMware HA in your environment to automatically recover failed nodes, any data sitting in a host-based SSD device that has not been written through to shared storage will not be available on recovery. This can lead to data loss on recovery - something that may or may not be an issue in your environment. Even though SSD is non-volatile storage, if the node it is sitting in is down, you can't get to it. You can get to it after that node is recovered, but the issue here is whether or not you can automatically fail over and have access to it.

Because of this issue, most host-based SSD products implement what is called a "write-through" cache, meaning they don't acknowledge writes at SSD latencies; they write them through to shared disk and send the write acknowledgement back from there. Anything on shared disk can potentially be recovered by any other node in the cluster, ensuring that no committed data is unavailable on failover. What this means, though, is that you won't get any write performance improvement from SSD, just better read performance.
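
A minimal sketch of the write-through vs write-back distinction; these classes are hypothetical illustrations of the two acknowledgement policies, not any product's actual API:

    # Write-through: the write is only acknowledged after it reaches
    # shared disk, so failover is safe but writes see disk latency.
    class WriteThroughCache:
        def __init__(self, ssd, shared_disk):
            self.ssd, self.shared_disk = ssd, shared_disk

        def write(self, block, data):
            self.shared_disk.write(block, data)  # must complete first
            self.ssd.store(block, data)          # populate cache for reads
            return "ack"                         # ack at shared-disk latency

    # Write-back: the write is acknowledged as soon as it hits SSD, so
    # writes are fast, but data not yet de-staged to shared disk is
    # unavailable if the host holding the SSD fails.
    class WriteBackCache:
        def __init__(self, ssd, shared_disk):
            self.ssd, self.shared_disk = ssd, shared_disk

        def write(self, block, data):
            self.ssd.store(block, data)
            return "ack"                         # ack at SSD latency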

What does your workload look like in terms of read vs write percentages? Most virtual environments are very write intensive, much more so than they ever were in physical environments, and virtual desktop infrastructure (VDI) environments can be as much as 90% writes when operating in steady state mode. If write performance is your problem, host-based SSD with a write-through cache may not help very much in the big picture.
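
Plugging a 90%-write steady-state workload into Amdahl's Law makes the point concrete; the 10x read speedup below is a hypothetical figure:

    # Hypothetical: a write-through cache that makes reads 10x faster
    # barely helps a workload that is 90% writes.
    reads, writes = 0.10, 0.90
    speedup = 1.0 / (writes + reads / 10)   # writes see no improvement
    print(f"{speedup:.2f}x")                # ~1.10x overall I/O speedup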

SAN-based SSD, on the other hand, can support failover without data loss, and if implemented with a write-back cache can provide write performance speedups as well. But many implementations available for use with SAN arrays are really only designed to speed up reads. Check carefully as you consider SSD to understand how it is implemented, and how well that maps to the actual performance requirements in your environment.

Caching vs Logging Architectures
Most SSD, wherever it is implemented, is used as a cache. Sizing guidelines for caches start with the cache as a percentage of the back-end storage it is front-ending. Generally the cache needs to be somewhere between 3% and 6% of the back-end storage, so larger data store capacities require larger caches. For example, 20TB of back-end data might require 1TB of SSD cache (5%).

Caches generally just speed up reads, but with a write-back cache, the SSD capacity has to be split between capacity used to speed up reads and capacity used to speed up writes. All else being equal in terms of performance requirements, write-back caches have to be larger than write-through caches, but they provide more balanced performance gains across both reads and writes.

Logging architectures, by definition, speed up writes, making them a good fit for write-intensive workloads like those found in virtual computing environments. Logs provide write performance gains by taking a very random workload and essentially removing the randomness from it: writes go sequentially to a log, are acknowledged from there, and are then asynchronously de-staged to a shared storage pool. This means the same SSD device will be faster when used as a log than when used as a cache, assuming some randomness in the workload. The write performance the guest VMs see is the performance of the log device operating in sequential write mode almost all the time, which can mean write performance improvements of up to 10x relative to that same device operating in the random mode it would otherwise be in. And a log provides write performance improvements for all writes from all VMs all the time. (What's also interesting is that if you can get 10x the IOPS from your current spinning disk, then given Amdahl's Law, you may not even need to purchase SSD to remove storage as the performance bottleneck.)
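
Here is a simplified sketch of that pattern; the interfaces are hypothetical, and a real implementation would also have to handle crash recovery, write ordering, and log space reclamation:

    import queue, threading

    # Sketch of a write log: writes land sequentially on a small, fast
    # log device and are acknowledged immediately; a background thread
    # then de-stages them to the shared storage pool.
    class WriteLog:
        def __init__(self, log_device, shared_pool):
            self.log_device = log_device    # small, fast, sequential
            self.shared_pool = shared_pool  # large, shared, random-access
            self.pending = queue.Queue()
            threading.Thread(target=self._destage, daemon=True).start()

        def write(self, block, data):
            self.log_device.append(block, data)  # sequential, so fast
            self.pending.put((block, data))
            return "ack"  # acknowledged at sequential-write latency

        def _destage(self):
            while True:   # drain the log to the shared pool asynchronously
                block, data = self.pending.get()
                self.shared_pool.write(block, data)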

Logs are very small (10GB or so) and are dedicated to a host, while the shared storage pool is accessible to all nodes in a cluster and primarily handles read requests. In a 20 node cluster with 20TB of shared data, you would need 200GB for the logs (10GB x 20 hosts) vs the 1TB you would need if SSD was used as a cache. Logs are much more efficient than caches for write performance improvements, resulting in lower costs.

If logs are located on SAN-based SSD, you not only get the write performance improvements, but this design fully supports node failover without data loss, a very nice differentiator from write-through cache implementations.

But what about read performance? This is where caches excel, and a write log doesn't seem to address it. That's true, which is why it's important to combine a logging architecture with storage tiering. Any SSD capacity not used by the logs can be configured into a fast tier 0, which provides read performance improvements for any data residing in that tier. The bottom line is that you can get better overall storage performance improvements from a "log + tiering" design than from a cache design while using 50% to 90% less high-performance device capacity (in this case, SSD). In our example above, if you buy a 256GB SAN-based SSD device and use it in a 20 node cluster, you'll get SSD sequential write performance for every write all the time, and have 56GB left over to put into a tier 0. Compare that to buying 1TB+ of cache capacity at SSD prices.
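
Pulling the sizing examples above together (the 5% cache guideline, 10GB logs per host, and the 256GB device):

    # SSD capacity math for the 20-node, 20TB examples above.
    hosts, shared_data_gb = 20, 20_000

    cache_gb = shared_data_gb * 0.05   # 5% guideline -> 1,000GB of SSD
    log_gb   = hosts * 10              # 10GB per host ->  200GB of SSD
    print(f"cache: {cache_gb:,.0f}GB vs logs: {log_gb:,.0f}GB "
          f"({1 - log_gb / cache_gb:.0%} less)")        # 80% less

    # One 256GB SAN-based device: logs first, the rest becomes tier 0.
    device_gb = 256
    tier0_gb = device_gb - log_gb      # 56GB left over for the read tier
    print(f"tier 0: {tier0_gb}GB")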

With single image management technology like linked clones or other similar implementations, you can lock your VM templates into this tier, and very efficiently gain read performance improvements against the shared blocks in those templates for all child VMs all the time. Single image management technology can help make the use of SSD capacity more efficient in either a cache or a log architecture, so don't overlook it as long as it is implemented in a way that does not impinge upon your storage performance.

Purpose-Built Storage Hardware
There are some interesting new array designs that leverage SSD, sometimes in combination with some of the other technologies mentioned above (log architectures, storage tiering, single image management, spinning disk). Designed specifically with the storage performance issues in virtual environments in mind, there is no doubt that these arrays can outperform legacy arrays. But for most enterprises, that may not be the operative question.

It's rare that an enterprise doesn't already have a sizable investment in storage. Many of these existing arrays support SSD, which can be deployed in a SAN-based cache or fast tier. It's much easier, and potentially much less disruptive and expensive, if existing storage investments can be leveraged to address the storage performance issues in virtual environments. It's also less risky, since most of the hot new "virtual computing-aware" arrays and appliances are built by startups, not proven vendors. Pure software-based options that support heterogeneous storage hardware and address the storage issues common in virtual computing environments, letting you take advantage of SSD capacity that fits into your current arrays, could be a simpler, more cost-effective, and less risky option than buying from a storage startup. But only, of course, if they adequately resolve your performance problem.

The Take-Away
If there's one point you should take away from this article, it's that just blindly throwing SSD at a storage performance problem in virtual computing environments is not going to be a very efficient or cost-effective way to address your particular issues. Consider how much more performance you need, whether you need it on reads, writes, or both, whether you need to failover without data loss, and whether preserving existing storage hardware investments is important to you. SSD is a great technology, but your best value from it will come when you deploy it most efficiently.

More Stories By Eric Burgener

Eric Burgener is vice president product management at Virsto Software. He has worked on emerging technologies for almost his entire career, with early stints at pioneering companies such as Tandem, Pyramid, Sun, Veritas, ConvergeNet, Mendocino, and Topio, among others, on fault tolerance and high availability, replication, backup, continuous data protection, and server virtualization technologies.

Over the last 25 years Eric has worked across a variety of functional areas, including sales, product management, marketing, business development, and technical support, and also spent time as an Executive in Residence with Mayfield and a storage industry analyst at Taneja Group. Before joining Virsto, he was VP of Marketing at InMage.
