Can You Trust VDI Storage Benchmarks?

The truth behind VDI benchmarks

by George Crump, Storage Switzerland

VDI (Virtual Desktop Infrastructure) implementation projects are going to be priorities for many IT managers in 2013, and a key concern will be end-user acceptance. If users don't embrace their virtual desktops they won't use them, and the project is doomed to failure. The key to acceptance is providing users with an environment that feels the same as, performs better than, and is more reliable than their current stand-alone system. The storage system bears most of the responsibility for delivering that experience.

IT managers who want to capitalize on the opportunity that the virtual desktop environment presents should focus on two key capabilities when they evaluate storage system vendors. The first is the ability to deliver the raw performance that the virtual desktop architecture needs; the second is doing so in the most cost-effective way possible. These two capabilities are traditionally at odds with each other and are not always well reflected in benchmark testing.

For most organizations, the number-one priority for gaining user acceptance is to keep the virtual desktop experience as similar to the physical desktop as possible. Typically, this will mean using persistent desktops: a VDI implementation in which each user's desktop is a stand-alone element in the virtual environment, where users can customize settings and add their own applications just as they could on their physical desktops.

The problem with persistent desktops is that a unique image is created for each desktop or user, which can add up to thousands of images for larger VDI populations. Obviously, allocating storage for thousands of virtual desktops is a high price to pay for maintaining a positive user experience.

In an effort to reduce the amount of storage required for all of these images, virtualized environments have incorporated features such as thin provisioning and linked clones. The goal is to have the storage system deliver a VDI environment that's built from just a few thinly provisioned "golden" VDI images, which are then cloned for each user.

As users customize their clones, only the differences between the golden image and the users' VDIs need to be stored. The result is a significant reduction in the total amount of storage required, lowering its overall cost. Also, the small number of golden images allows for much of the VDI read traffic to be served from a flash-based tier or cache.
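
To make the clone mechanics concrete, here is a minimal sketch in Python of the idea behind linked clones: every desktop reads through to a shared golden image and stores only its own differences. The class and field names (GoldenImage, LinkedClone, delta) are illustrative, not any hypervisor's actual API.

```python
class GoldenImage:
    """Read-only master image shared by every cloned desktop."""
    def __init__(self, blocks):
        self.blocks = blocks  # block number -> data

class LinkedClone:
    """One user's desktop: stores only blocks that differ from the golden image."""
    def __init__(self, golden):
        self.golden = golden
        self.delta = {}  # block number -> user-modified data

    def read(self, block):
        # Reads fall through to the shared golden image unless the
        # user has overwritten that block.
        return self.delta.get(block, self.golden.blocks.get(block))

    def write(self, block, data):
        # Writes never touch the golden image; only the per-user delta grows.
        self.delta[block] = data

golden = GoldenImage({0: "os blocks", 1: "app blocks"})
desktops = [LinkedClone(golden) for _ in range(1000)]
desktops[0].write(2, "user settings")
# 1,000 desktops, but stored data = one golden image plus tiny per-user deltas.
```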

When a write occurs from a thinly provisioned, cloned virtual desktop, more has to happen than just the operation that writes the data object. The volume needs additional space allocated to it (one write operation), the metadata table that tracks unique branches of the cloned volume has to be updated (another write operation), and, depending on the RAID protection in place, some form of parity data needs to be written. Then, finally, the data object itself is written. This entire process has to happen with each data change, no matter how small.
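
As a rough illustration of that write amplification, the sketch below simply counts the physical operations behind a single logical write. The four-operation breakdown follows the paragraph above; the desktop-pool write rate is an assumed round number, not a measured value.

```python
# Physical operations triggered by ONE logical write to a thin, cloned volume
# (per the breakdown above; the parity write applies when parity RAID is in use).
ops_per_logical_write = [
    "allocate thin-provisioned space",
    "update the clone metadata table",
    "write RAID parity",
    "write the data object itself",
]
write_amplification = len(ops_per_logical_write)  # 4x in this breakdown

logical_writes_per_sec = 2000  # assumed aggregate write rate for a desktop pool
physical_write_iops = logical_writes_per_sec * write_amplification
print(physical_write_iops)     # 8000 physical write IOPS the back end must absorb
```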

Herein lies the trade-off in using these features. While they reduce the amount of space required for the VDI images, thin provisioning and cloning increase the demand for high write performance in the storage system. This presents a significant opportunity for storage system vendors who can address these new performance requirements.

Many storage systems that use a mix of flash memory and hard disk technology don't use the higher-performing flash for writes; they reserve it for frequently read data. While these storage systems have storage controllers designed to handle high read loads, the increased write activity generated by thin provisioning and cloning still goes to relatively slow hard disk drives. Because this type of I/O traffic is highly random, the hard drives are constantly "thrashing about": the controller sits idle while it waits for the hard disk to rotate into position to complete each write command. Even systems with an SSD tier or cache may have trouble providing adequate performance, because they, too, don't leverage the high-speed flash for write traffic.
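
The cost of that rotational wait is easy to quantify. The sketch below estimates per-spindle random-write IOPS from assumed, typical latency figures for a 7,200 RPM drive (not any vendor's quoted numbers) and contrasts the result with flash.

```python
# Rough per-device random-write throughput, using assumed typical latencies.
avg_seek_ms = 8.0                      # head movement on a 7,200 RPM class drive
avg_rotational_ms = 4.17               # half a rotation at 7,200 RPM
hdd_service_time_ms = avg_seek_ms + avg_rotational_ms

hdd_iops = 1000 / hdd_service_time_ms  # ~82 random write IOPS per spindle
flash_iops = 1000 / 0.1                # ~10,000 IOPS at an assumed 0.1 ms latency

print(round(hdd_iops), round(flash_iops))  # -> 82 10000
```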

Due to the high level of thin provisioning and cloning, plus the fact that once a desktop is created a large part of its I/O is write traffic, many cached or tiered systems do not perform well in real-world VDI environments and can produce misleading VDI benchmark scores.

The Truth Behind VDI Benchmarks
Most VDI benchmarks focus primarily on one aspect of the VDI experience: the time it takes to boot a given number of virtual desktops. The problem with using a "boot storm test" is that this important but read-heavy event is only part of the overall VDI storage challenge. During most of the day desktops are writing data, not reading it. In addition, routine activities such as logging out and application updates are very write-intensive. A storage system's ability to handle these write activities is not measured by many VDI benchmarking routines.
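
To see why a boot-storm score can mislead, consider a simple weighted mix. The read/write ratios below are assumptions for illustration (boot storms are read-dominated and the steady-state working day write-dominated, as described above), not measurements from any particular benchmark.

```python
# Assumed read/write mixes for two phases of the VDI day.
boot_storm   = {"read": 0.80, "write": 0.20}  # what boot benchmarks measure
steady_state = {"read": 0.30, "write": 0.70}  # what users actually generate

total_iops = 10_000  # assumed aggregate load from the desktop pool

for name, mix in (("boot storm", boot_storm), ("steady state", steady_state)):
    writes = total_iops * mix["write"]
    print(f"{name}: {writes:.0f} write IOPS must be absorbed")
# A system sized only for the boot storm plans for 2,000 write IOPS;
# the real working day demands 7,000.
```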

A second problem with many VDI benchmarking claims is that for their testing configuration they do not use thinly provisioned and cloned volumes. Instead, they use thick volumes in order to show maximum VDI performance.

As discussed above, in order to keep user adoption high and costs low, most VDI implementations will preferentially use persistent desktops with thin provisioning and cloning. Be wary of vendors claiming a single device can support over 1,000 VDI users: these claims are usually based on the amount of storage capacity a typical VDI user might need, rather than on the read/write IOPS performance they will most likely need.
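
A quick sanity check shows how far the two sizing methods diverge. All the figures below (capacity per desktop, per-user IOPS, array capability) are assumed round numbers used only to illustrate the arithmetic.

```python
# Sizing a "1,000-user" array two ways, with assumed round numbers.
users = 1000
gb_per_desktop = 30        # assumed thin/cloned footprint per user
iops_per_user = 100        # the steady-state figure cited later in this article

capacity_needed_tb = users * gb_per_desktop / 1000  # 30 TB: easy to supply
iops_needed = users * iops_per_user                 # 100,000 IOPS

array_capacity_tb = 40     # assumed array spec: plenty of space...
array_iops = 25_000        # ...but far fewer random IOPS

print(capacity_needed_tb <= array_capacity_tb)  # True: the capacity claim holds
print(iops_needed <= array_iops)                # False: the performance claim fails
```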

Trustworthy VDI Performance
A successful VDI project is one that gains end-user acceptance while reducing desktop support costs. The cost of a storage system that can provide thin provisioning, cloning and an adequately sized flash storage area to support the virtual environment could be too high for some enterprises to afford. And additional costs can be incurred from the performance problems that are likely to appear after the initial desktop boot completes, because of the high level of write I/O.

The simplest solution may be to deploy a solid state appliance like Astute Networks ViSX for VDI. These devices use 100% solid state storage to provide high performance on both reads and writes. This means that boot performance is excellent and that performance is maintained throughout the day as well.

With a solid state based solution to the above problems, performance will not be an issue, but cost may still be. Even though it can provide consistent read/write performance throughout the day for a given number of virtual desktops, the cost per desktop of a flash based solution can be significantly higher than a hard drive based system.

However, in larger VDI environments (400+ users), flash-based systems are likely the only viable alternative for meeting performance requirements, which can easily exceed 100 IOPS per user. Fortunately, flash-based systems can also produce efficiencies that bring down that cost, in addition to the well-known benefits of using one-tenth the floor space, power and cooling of traditional storage systems.
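
The 100-IOPS-per-user figure makes the spindle math stark. Here is a minimal sketch; the per-spindle rate reuses the assumed HDD estimate from earlier, and the appliance figure is an assumed round number, not an Astute specification.

```python
import math

# How many devices does a 400-user pool need, under assumed figures?
users = 400
iops_per_user = 100                    # the figure cited above
required_iops = users * iops_per_user  # 40,000 IOPS

hdd_iops_per_spindle = 82              # assumed random-I/O rate from earlier
flash_appliance_iops = 100_000         # assumed single-appliance capability

print(math.ceil(required_iops / hdd_iops_per_spindle))  # 488 hard drives
print(required_iops <= flash_appliance_iops)            # True: one appliance suffices
```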

First, the density of virtual desktops per host can be significantly higher with a flash appliance. And, the system is unaffected by the increase in random I/O as the density of virtual machines increases.

Second, the speed of the storage device compensates for the increased demands of thin provisioning and cloning operations run on the hypervisor. These data reduction services can now be used without a performance penalty. This means that the cost of a storage system with a more powerful storage controller and expensive data services like thin provisioning and cloning can be avoided.

Finally, the flash appliance is designed to tap into more of the full potential of solid state-based storage. For example, Astute uses a unique DataPump Engine protocol processor that's designed to specifically accelerate data onto and off of the network and through the appliance to the fast flash storage. This lowers the cost per IOPS compared to other flash-based storage systems.

Most legacy storage systems use traditional networking components and get nowhere near the full potential of flash. In short, the appliance can deliver better performance with the same amount of flash memory space. This leads to further increases in virtual machine density and space efficiency because more clones can be made - resulting in very low cost per VDI user.

Conclusion

VDI benchmark data can be useful, but the test itself must be analyzed. Users should look for tests that focus not only on boot performance but also on performance throughout the day and at the end of the day. If systems with a mix of flash and HDD are used, then enough flash must be purchased to avoid cache misses, since these systems rarely have enough disk spindles to provide adequate secondary performance.
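
Why a cache miss matters so much follows from simple average-latency arithmetic. The latencies and hit rates below are assumed illustrative values, not measurements of any particular hybrid system.

```python
# Effective latency of a hybrid flash/HDD system, with assumed figures.
flash_ms, hdd_ms = 0.1, 12.0

for hit_rate in (0.99, 0.95, 0.90):
    effective = hit_rate * flash_ms + (1 - hit_rate) * hdd_ms
    print(f"{hit_rate:.0%} hits -> {effective:.2f} ms average")
# 99% hits -> 0.22 ms; 95% -> 0.70 ms; 90% -> 1.29 ms.
# Even a few percent of misses, served by scarce spindles, dominate latency.
```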

A simpler and better performing solution may be to use a solid state appliance like those available from Astute Networks. These allow for consistent, high performance throughout the day at a cost per IOPS that hybrid and traditional storage vendors can't match. Because they enable the hypervisor's built-in capabilities, like thin provisioning, cloning and snapshots, they can also be deployed very cost-effectively.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.
