Can You Trust VDI Storage Benchmarks?

The truth behind VDI benchmarks

by George Crump, Storage Switzerland

VDI (Virtual Desktop Infrastructure) implementation projects are going to be priorities for many IT Managers in 2013 and a key concern will be end-user acceptance. If the users don't embrace their virtual desktops they won't use them and the project is doomed to failure. The key to acceptance is to provide users with an environment that feels the same, performs better and is more reliable than their current stand-alone system. The storage system bears most of the responsibility in delivering that experience.

IT managers who want to capitalize on the opportunity that the virtual desktop environment presents should focus on two key capabilities when they evaluate storage system vendors: first, delivering the raw performance that the virtual desktop architecture needs, and second, doing so in the most cost-effective way possible. These two capabilities are traditionally at odds with each other and are not always well reflected in benchmark testing.

For most organizations the number-one priority for gaining user acceptance is to keep the virtual desktop experience as similar to the physical desktop as possible. Typically, this will mean using persistent desktops, a VDI implementation in which each user's desktop is a stand-alone element in the virtual environment for which they can customize settings and add their own applications just like they could on their physical desktop.

The problem with persistent desktops is that a unique image is created for each desktop or user, which can add up to thousands of images for larger VDI populations. Obviously, allocating storage for thousands of virtual desktops is a high price to pay for maintaining a positive user experience.

In an effort to reduce the amount of storage required for all of these images, virtualized environments have incorporated features such as thin provisioning and linked clones. The goal is to have the storage system deliver a VDI environment that's built from just a few thinly provisioned "golden" VDI images, which are then cloned for each user.

As users customize their clones, only the differences between the golden image and the users' VDIs need to be stored. The result is a significant reduction in the total amount of storage required, lowering its overall cost. Also, the small number of golden images allows for much of the VDI read traffic to be served from a flash-based tier or cache.
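To make the savings concrete, here is a back-of-the-envelope sketch; the image and delta sizes below are hypothetical, chosen only for illustration, comparing full per-user copies against a single golden image plus per-user deltas:

```python
# Illustrative comparison of per-user storage with and without linked
# clones. All sizes are assumptions for the sake of the arithmetic.

def full_copy_gb(users: int, image_gb: int) -> int:
    """Storage needed if every desktop is a full copy of the image."""
    return users * image_gb

def linked_clone_gb(users: int, image_gb: int, delta_gb: int) -> int:
    """Storage needed for one golden image plus a delta per user."""
    return image_gb + users * delta_gb

users, image_gb, delta_gb = 1000, 40, 2
print(full_copy_gb(users, image_gb))               # 40000 GB: full copies
print(linked_clone_gb(users, image_gb, delta_gb))  # 2040 GB: golden + deltas
```

Even with generous per-user deltas, the clone-based layout is an order of magnitude smaller, which is what makes serving the golden images from a flash tier practical.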

When a write occurs from a thinly provisioned, cloned virtual desktop, more has to happen than just the operation to write that data object. The volume needs to have additional space allocated to it (one write operation), the metadata table that tracks unique branches of the cloned volume has to be updated (another write operation) and, depending on the RAID protection in place, some sort of parity data needs to be written. Then, finally, the data object itself is written. This entire process has to happen with each data change, no matter how small.
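The write amplification described above can be sketched in a few lines. This is a simplified accounting model, not any vendor's actual implementation, and the parity-write count is an assumption that varies by RAID level:

```python
# Simplified model of write amplification on a thinly provisioned,
# cloned volume: each logical write triggers extra physical operations.

def physical_writes(logical_writes: int, raid_parity_writes: int = 1) -> int:
    """Count physical write operations for a number of logical writes.

    Per logical write, the model assumes:
      1 x space allocation update (thin provisioning)
      1 x clone metadata update (branch-tracking table)
      raid_parity_writes x parity update (depends on RAID level)
      1 x the data write itself
    """
    per_logical_write = 1 + 1 + raid_parity_writes + 1
    return logical_writes * per_logical_write

# A desktop issuing 50 logical writes/sec with one parity write per
# update generates 200 physical writes/sec at the storage layer.
print(physical_writes(50))  # 200
```

Under these assumptions every logical write is multiplied fourfold before it reaches the disks, which is why the back-end write load, not the front-end one, is what the storage system must be sized for.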

Herein lies the trade-off in using these features. While they reduce the amount of space required for VDI images, thin provisioning and cloning increase the demand for high write performance in the storage system. This presents a significant opportunity for storage system vendors who can address these new performance requirements.

Many storage systems that use a mix of flash memory and hard disk technology don't use the higher-performing flash for writes; they use it for actively read data. While these storage systems have storage controllers designed to handle high read loads, the increased write activity generated by thin provisioning and cloning still lands on relatively slow hard disk drives. Because this type of I/O traffic is highly random, the hard drives are constantly "thrashing": the controller sits idle while it waits for the hard disk to rotate into position to complete each write command. Even systems with an SSD tier or cache may have problems providing adequate performance because they, too, don't leverage the high-speed flash for write traffic.

Due to the heavy use of thin provisioning and cloning, plus the fact that once a desktop is created a large part of its I/O is write traffic, many cached or tiered systems do not perform well in real-world VDI environments and can produce misleading VDI benchmark scores.

The Truth Behind VDI Benchmarks
Most VDI benchmarks focus primarily on one aspect of the VDI experience: the time it takes to boot a given number of virtual desktops. The problem with using a "boot storm" test is that this important but read-heavy event is only part of the overall VDI storage challenge. During most of the day desktops are writing data, not reading it. In addition, routine activities such as logging out and applying application updates are very write-intensive. The ability of a storage system to handle these write activities is not measured by many VDI benchmarking routines.
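As a rough illustration, commonly cited approximate read/write mixes for the phases of a VDI day look something like the following; the exact ratios vary by environment, but a boot-only benchmark exercises only the first row:

```python
# Approximate, commonly cited I/O mixes for VDI workload phases.
# A boot storm is read-heavy; the rest of the day is write-heavy,
# which is the part many benchmarks never measure.

io_mix = {
    "boot_storm":   {"read": 0.80, "write": 0.20},
    "steady_state": {"read": 0.20, "write": 0.80},
    "logoff_storm": {"read": 0.10, "write": 0.90},
}

for phase, mix in io_mix.items():
    print(f"{phase}: {mix['read']:.0%} read / {mix['write']:.0%} write")
```

A benchmark score built on the first row alone says little about how the system behaves during the other twenty-three hours of the day.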

A second problem with many VDI benchmarking claims is that for their testing configuration they do not use thinly provisioned and cloned volumes. Instead, they use thick volumes in order to show maximum VDI performance.

As discussed above, in order to keep user adoption high and costs low, most VDI implementations will prefer persistent desktops with thin provisioning and cloning. Be wary of vendors claiming a single device can support over 1,000 VDI users. These claims are usually based on the amount of storage a typical VDI user might need, rather than the read/write IOPS performance they will most likely need.

Trustworthy VDI Performance
A successful VDI project is one that gains end-user acceptance while reducing desktop support costs. The cost of a storage system that can provide thin provisioning, cloning and an adequately sized flash storage area to support the virtual environment could be too high for some enterprises to afford. Additional costs can also appear after the initial desktop boot completes, when the high level of write I/O causes performance problems.

The simplest solution may be to deploy a solid state appliance like Astute Networks ViSX for VDI. These devices use 100% solid state storage to provide high performance on both reads AND writes. This means boot performance is excellent and performance is maintained throughout the day as well.

With a solid state based solution to the above problems, performance will not be an issue, but cost may still be. Even though it can provide consistent read/write performance throughout the day for a given number of virtual desktops, the cost per desktop of a flash based solution can be significantly higher than a hard drive based system.

However, it's likely in larger VDI environments (400+ users) that flash-based systems are really the only viable alternative to meet the performance requirements which can easily exceed 100 IOPS per user. Fortunately, flash-based systems can also produce efficiencies that bring down that cost in addition to the well-known benefits of using 1/10th the floor space, power and cooling compared to traditional storage systems.
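A quick sizing sketch shows why spindles alone struggle at this scale. The per-drive figure below is a typical planning estimate for a 15K RPM drive, not a measured value, and the 100 IOPS per user comes from the paragraph above:

```python
# Rough sizing arithmetic for a larger VDI deployment: total IOPS
# demand versus the number of spinning disks needed to match it.

def required_iops(users: int, iops_per_user: int = 100) -> int:
    """Aggregate steady-state IOPS demand for a user population."""
    return users * iops_per_user

def drives_needed(total_iops: int, iops_per_drive: int = 180) -> int:
    """Hard drives needed to serve that demand (ceiling division).

    ~180 IOPS is a common planning estimate for a 15K RPM drive.
    """
    return -(-total_iops // iops_per_drive)

demand = required_iops(400)   # 400 users at 100 IOPS each
print(demand)                 # 40000 IOPS
print(drives_needed(demand))  # 223 drives to match on HDD alone
```

Needing hundreds of spindles just to meet the IOPS target, independent of capacity, is what pushes large deployments toward flash.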

First, the density of virtual desktops per host can be significantly higher with a flash appliance. And, the system is unaffected by the increase in random I/O as the density of virtual machines increases.

Second, the speed of the storage device compensates for the increased demands of thin provisioning and cloning operations run on the hypervisor. These data reduction services can now be used without a performance penalty. This means that the cost of a storage system with a more powerful storage controller and expensive data services like thin provisioning and cloning can be avoided.

Finally, the flash appliance is designed to tap into more of the full potential of solid state-based storage. For example, Astute uses a unique DataPump Engine protocol processor that's designed to specifically accelerate data onto and off of the network and through the appliance to the fast flash storage. This lowers the cost per IOPS compared to other flash-based storage systems.

Most legacy storage systems use traditional networking components and get nowhere near the full potential of flash. In short, the appliance can deliver better performance with the same amount of flash memory space. This leads to further increases in virtual machine density and space efficiency because more clones can be made - resulting in very low cost per VDI user.

Conclusion

VDI benchmark data can be useful, but the test itself must be analyzed. Users should look for tests that focus not only on boot performance but also on performance throughout the day and at the end of the day. If systems with a mix of flash and HDD are used, then enough flash must be purchased to avoid cache misses, since these systems rarely have enough disk spindles to provide adequate secondary performance.

A simpler and better performing solution may be to use a solid state appliance like those available from Astute Networks. These allow for consistent, high performance throughout the day at a cost per IOPS that hybrid and traditional storage vendors can't match. Their enablement of the built-in hypervisor capabilities, like thin provisioning, cloning and snapshots, also means that they can be deployed very cost effectively.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.

