
Doggin’ It with VDI

My IT guys were at their wits' end trying to manage the 1,000 or so desktops spread out over four different locations

Hey there; the IT Dog back with some color commentary on our VDI experience. When I first heard the name, I thought VDI was the latest model Volkswagen diesel, but as the guy with the suit explained in the last blog, VDI is Virtual Desktop Infrastructure.  Now that you know what VDI means, I am sure you know all there is to know about it, right?  Well, I wish it was that easy for me.

Where Do I Begin?
I guess I’ll start from the beginning. At my company, my IT guys were at their wits' end trying to manage the 1,000 or so desktops, spread out over four different locations, with various flavors of Windows and who knows how many different versions of applications and other personal stuff. I kept getting the ‘we need more staff to manage this mess’ line from them. A lot of these problems had to do with acquisitions and an expanding business – which is all good.

Cue the VDI sales guy with the big Mercedes: “I’ve got just the thing to solve your problems: VDI”. Let’s see… take control of the company computing assets all from the back room – what a great idea! No more tech support phone calls, no more sending staff out to offices to get chewed out because some website they were on loaded some garbage onto their machine and now it runs slower than a weenie dog in a foot of snow. All this ‘problem solving’ was going to cost a bundle, however, and I was the guy who had to sell management on it. With my tail on the line, we bit the bone and put the system in.

I Wish My Problems Were “Virtual”
We were at the bleeding edge of the VDI wave, so we expected some startup and implementation issues. Our vendor helped us specify and design a system to meet the performance requirements within the budget we had sold management on. We installed racks of new servers, more spinning disks than at the Frisbee Dog World Championships, power, cooling, wires, wires and more wires. We had it all going on. I spent a month going around selling all the end users on this, saying their lives were going to be better – no more sitting on hold waiting for support, the latest and greatest applications, easy access anytime, anywhere, with the potential to support any device in the future, yadda, yadda, yadda. We went live a few months later and began to observe performance.

“Houston, We Have A Problem”
It did not take long to find out about some of the potential issues facing VDI installations today.  The first problem we had to deal with had to do with simply getting all the users up and running every morning. I learned about the dreaded “boot storm.”  Of course I had no idea what a boot storm was until we started this project.  I thought it referred to something from Nazi Germany.  But there it is, we have a boot storm problem – when lots of users try to start up their machines at the same time, it puts a tremendous load on the VDI hardware and network and all users suffer from poor service and slow startups.  I have to admit – it happened to me also and as you know, being a dog, my life is too short to be waiting around for things like that.

It turns out we designed a system for a typical day in the office for our 1,000+ users. What we did not do was design a system that would be responsive during a “100-year” type of event, like loading 400 user images at the same time. Basically, our system was 90% perfect, but the last 10% was really causing problems for the company. The feedback I was getting was pretty tough to take. I felt like I just pooped on the carpet.

Getting to 100%
As you remember from your Econ 101 class, there is a bell-curve distribution for just about everything, and IT system usage fits that model pretty well. I went back to our vendor to discuss what it would take to get that last 10% of performance to handle the “100-year boot storm event” (really, it was every day), and it turns out this is a very common problem with VDI. The main challenge is that if you build your VDI for the average ‘steady state’ I/O operations per second (IOPS) – the 90% system – you can do it cost-effectively, but performance is inadequate during the usage storms. One solution to the IOPS problem is to scale up by adding more disks, sizing IOPS for the peak usage. The problem with that is your storage then costs twice as much as all 1,000 PCs put together, and all that extra IOPS capacity you just bought sits idle most of the time. Since I put my tail on the line for this system, my bosses promptly cut it off, and I was on the hook to fix this problem with limited resources.
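The gap between steady-state and boot-storm sizing is easy to see with back-of-the-envelope numbers. The figures below are illustrative assumptions, not numbers from our actual deployment:

```python
# Rough VDI storage sizing sketch (all numbers are illustrative assumptions).
USERS = 1000
STEADY_IOPS_PER_USER = 10      # assumed light office workload per desktop
BOOT_IOPS_PER_USER = 100       # a booting desktop reads far more blocks
CONCURRENT_BOOTS = 400         # the morning "boot storm" peak
IOPS_PER_SPINNING_DISK = 150   # ballpark for one 10K RPM SAS drive

steady_state_iops = USERS * STEADY_IOPS_PER_USER
boot_storm_iops = CONCURRENT_BOOTS * BOOT_IOPS_PER_USER

disks_for_steady = -(-steady_state_iops // IOPS_PER_SPINNING_DISK)  # ceiling division
disks_for_peak = -(-boot_storm_iops // IOPS_PER_SPINNING_DISK)

print(f"steady-state: {steady_state_iops} IOPS -> {disks_for_steady} disks")
print(f"boot storm:   {boot_storm_iops} IOPS -> {disks_for_peak} disks")
```

With these assumed numbers, covering the few-minutes-a-day peak takes roughly four times the spindles of the steady-state system – capacity that sits idle the rest of the day, which is exactly the cost problem described above.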

Stop the Spinning
We needed to get creative to solve the IOPS problem. The solution we came up with was to buy a limited quantity of SSDs and use SSD caching software to reduce the IOPS workload on the spinning disks. This made sense: things like the base PC image, which every user reads at startup, can easily be held in the SSD cache, and serving them from SSD delivers the extra IOPS without adding more spinning disks. Once the boot storm passed each morning, the caching software would recognize that and automatically start caching other heavily accessed data, so system performance was improved all day. We solved our immediate problem and were able to focus on other VDI-related management issues.
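The caching idea is essentially a least-recently-used (LRU) read cache sitting in front of the spinning disks. Here is a toy sketch of the concept – the class and names are hypothetical illustrations, not the vendor's actual software:

```python
from collections import OrderedDict

class SSDReadCache:
    """Toy LRU read cache: hot blocks (like a shared boot image) are
    served from fast SSD; everything else falls through to slow disk."""

    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.backing = backing_store   # simulated slow disk: block_id -> bytes
        self.cache = OrderedDict()     # simulated SSD, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:              # cache hit: cheap SSD read
            self.cache.move_to_end(block_id)
            return self.cache[block_id], "ssd"
        data = self.backing[block_id]           # cache miss: expensive disk read
        self.cache[block_id] = data             # promote block into the cache
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least-recently-used block
        return data, "disk"

# During a boot storm every desktop reads the same base-image blocks,
# so after the first boot the rest are served from SSD.
disk = {f"boot:{i}": f"block{i}".encode() for i in range(4)}
cache = SSDReadCache(capacity_blocks=8, backing_store=disk)
first = [cache.read(f"boot:{i}")[1] for i in range(4)]   # all misses
second = [cache.read(f"boot:{i}")[1] for i in range(4)]  # all hits
print(first, second)
```

The point of the sketch is the access pattern: the first boot warms the cache, and every subsequent boot of the same image hits SSD instead of spindles, which is why a small amount of flash absorbs most of the boot-storm IOPS.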


Tell me about your experience rolling out VDI.


More Stories By Peter Velikin

Peter Velikin has 12 years of experience creating new markets and commercializing products in multiple high tech industries. Prior to VeloBit, he was VP Marketing at Zmags, a SaaS-based digital content platform for e-commerce and mobile devices, where he managed all aspects of marketing, product management, and business development. Prior to that, Peter was Director of Product and Market Strategy at PTC, responsible for PTC’s publishing, content management, and services solutions. Prior to PTC, Peter was at EMC Corporation, where he held roles in product management, business development, and engineering program management.

Peter has an MS in Electrical Engineering from Boston University and an MBA from Harvard Business School.
