Accelerating the Move to Cloud-Based Client Computing

Delivering personality into the standardized desktop is essential

For a long time we have known that corporate use of PCs is inefficient and overly expensive: analysts estimate that a typical PC costs around three times its purchase price to manage over its lifetime. But, until recently, there has been little that organizations could do to change the situation while still delivering acceptable service. Virtualization has changed this in important ways: the physical PC need no longer be the key delivery mechanism, so images can be hosted pretty much anywhere. Essentially, we can host copies of a client operating system and deliver a display protocol to users over the network. However, as with many things, the devil - and the opportunity - is in the details. Let's look in more detail at how cloud-hosted clients can work today and how changes underway will improve the situation in the future.

Today, we can move desktops into the cloud and manage them in much the same way that we currently manage physical machines. A service provider can build a system to deliver client desktops hosted in the cloud. A customer organization provides copies of its gold-build desktops to the provider, who replicates them for each user and then allocates an image to every user the first time they connect. From then on, each user is linked to that same image every time they connect (a sketch of this allocation logic follows the list below). Behind the scenes, the provider takes care of all of the housekeeping, such as:

  • Storing the virtual machine for the user
  • Delivering users' VMs to a hypervisor running on one of the service provider's servers, and starting, stopping and storing the virtual machine
  • Dealing with issues of authentication and integration with customer systems
  • Managing security of data and communications with the users' virtual machines
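
To make the allocation step concrete, here is a minimal Python sketch of the broker logic described above. The ImageBroker class and its method names are purely illustrative assumptions, not any provider's actual API:

    import uuid

    class ImageBroker:
        """Illustrative provider-side broker: allocates a cloned desktop
        image to each user on first connect, then reuses it thereafter."""

        def __init__(self, gold_image_id):
            self.gold_image_id = gold_image_id
            self.assignments = {}  # user -> per-user VM image id

        def clone_gold_image(self):
            # Stand-in for the provider's storage layer copying the
            # customer's gold build into a new per-user virtual disk.
            return "vm-" + uuid.uuid4().hex[:8]

        def connect(self, user):
            # First connection: allocate a fresh copy; afterwards the
            # user is always linked back to the same image.
            if user not in self.assignments:
                self.assignments[user] = self.clone_gold_image()
            return self.assignments[user]

    broker = ImageBroker(gold_image_id="acme-gold-build")
    first = broker.connect("alice")   # first logon clones a new image
    again = broker.connect("alice")   # later logons return the same image
    assert first == again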

The attraction of this system is that it is very similar to the current ways of operating and, therefore, familiar to IT organizations: users own a desktop image, the image is patched with existing tools and, at the end of the virtual machine's life, standard processes can be used to destroy the image. That is not to say that implementing such a system is easy. As with any new way of working, there are problems to be solved: in this case, who is responsible for each step of the image lifecycle, from creation through patching, maintenance and support to eventual destruction, and, ultimately, who takes responsibility for any failures.

The strength of this approach - its similarity to current practices - is also its real weakness: it does not change the model sufficiently to really change the economics of client computing. Looking at the details of such a solution, on the positive side the customer can benefit from the economies of scale a cloud provider brings in offering servers cheaply and, if users are widely dispersed, the provider's networking strength could deliver a better interactive experience than the customer organization could achieve hosting the servers internally. On the downside, the PC still needs to be managed in much the same way as before. While we have reduced the need for desk-side support, we have added a new layer of administration between the customer organization and the provider. Hence, this solution will play well where there is some additional benefit to moving user desktops out of the customer organization and to a provider - for example, large numbers of widely dispersed users outside the LAN - but not more generally.

I like to think of the above as a "first-stage" approach to desktop hosting because, while it can work, it does not deliver the level of benefit needed to become a solution for the majority of organizations or users. The key to the next, higher-value solution is to recognize that virtualization is a far more powerful concept than just a way to run multiple virtual machines on a server. Because of the isolation that virtualization provides, we can think of it as separating, and keeping separate, the different components of a user's desktop. Instead of treating each user's software image as an individual, unique asset, recognize that all users' images are basically the same, with some select differences. In this way, we benefit from economies of scale across all that is similar and manage only the differences that make each user feel that the machine is "theirs."

How does this work in practice? Each time a user logs on, she is given a clean copy of an operating system with a standard set of applications already installed. You can think of this as being similar to the gold image that we might have cloned in a first-stage implementation, except that here we make a fresh image every time the user logs on rather than just once. This has the side benefit of making patch management and delivery far simpler and less error-prone: rather than having to patch each and every user image, many of which may have drifted so far from standard that patching fails, we just patch the gold image - the user will get it the next time they log on.
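
A minimal sketch of this clone-on-logon model, in Python, with dictionaries standing in for real disk images; all names here are hypothetical:

    import copy

    # Illustrative gold image: patches applied here reach every user at
    # their next logon, because each logon starts from a fresh clone.
    gold_image = {"os": "standard-build", "apps": ["office", "browser"], "patches": []}

    def apply_patch(image, patch):
        image["patches"].append(patch)

    def logon(user):
        # Every logon gets a pristine copy of the current gold image;
        # nothing from the user's previous session survives in this layer.
        desktop = copy.deepcopy(gold_image)
        desktop["assigned_to"] = user
        return desktop

    before = logon("bob")
    apply_patch(gold_image, "KB123456")   # patch the gold image once...
    after = logon("bob")                  # ...and every later logon has it
    assert "KB123456" not in before["patches"]
    assert "KB123456" in after["patches"]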

The standard image contains the applications that the organization wants delivered to that user, with the exception of any hosted or streamed applications, which are delivered into the image in their usual way. It's important not to confuse this completely standardized image with a desktop that the user would find acceptable and productive. At this stage, the desktop is not configured or personalized for the user. That is acceptable as a one-off occurrence, but would be unacceptable if users had to configure their machines each time they logged on - users would not tolerate the diminished experience and the business would not want users wasting time each day making the machine productive.

The key is to be able to set up and personalize the standard machine without the user being aware and without taxing IT staff and resources. This is known as delivering the "personality" to the machine on demand. The personality contains everything that makes a machine unique to a user. By managing each personality separately from the underlying operating system and applications, you standardize the latter while giving users a familiar working environment. A simple way to envision this is to think of the operating system as the base layer of client computing, with the applications as a layer above and the personality as a third layer on top. Hence, there are three layers to the virtualized desktop, and we can talk in terms of how each is delivered.
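
The three-layer model can be sketched as a simple data structure; the layer names and the assemble_desktop helper below are illustrative assumptions, not any vendor's design:

    from dataclasses import dataclass, field

    @dataclass
    class Desktop:
        """The three layers of a virtualized desktop, assembled on demand:
        a standard OS, standard applications, and a per-user personality."""
        os_layer: str
        app_layer: list
        personality: dict = field(default_factory=dict)

    def assemble_desktop(user, personalities):
        # The bottom two layers are identical for everyone; only the
        # personality layer differs from user to user.
        return Desktop(
            os_layer="standard-os-build",
            app_layer=["email-client", "office-suite"],
            personality=personalities.get(user, {}),
        )

    personalities = {"alice": {"wallpaper": "beach.jpg"}}
    d1 = assemble_desktop("alice", personalities)
    d2 = assemble_desktop("bob", personalities)
    assert d1.os_layer == d2.os_layer          # shared layers are identical
    assert d1.personality != d2.personality    # only personality differs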

We have described how the operating system is delivered - a fresh copy loaded onto a hypervisor each time the user logs on - and how applications are delivered, whether installed, hosted or virtualized, but we have not yet talked about the delivery of personality. For the operating system and applications, it's easy to see how virtualization keeps the layers separate so that they can be delivered independently. To deliver personality, we must first abstract it from the user environment. Once this is done, the personality can be centrally managed and delivered back when the user next logs on. One difference between how the bottom two layers are managed and how personality is managed is that personality data is typically more dynamic than the standardized layers, reflecting users' continued use and refinement of their environments. The delivery of personality can be handled by a User Environment Management product, which takes care of personality abstraction, management and delivery across all of the application delivery technologies.
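
A toy sketch of this abstract-then-redeliver cycle, assuming a simple central key-value store for personalities; real User Environment Management products are far more sophisticated:

    personality_store = {}  # central store, keyed by user

    def capture_personality(user, session_settings):
        # At logoff, abstract the user's settings away from the
        # disposable image and persist them centrally.
        personality_store[user] = dict(session_settings)

    def deliver_personality(user, fresh_desktop):
        # At the next logon, lay the saved personality back over the
        # freshly built standard desktop.
        fresh_desktop.update(personality_store.get(user, {}))
        return fresh_desktop

    capture_personality("carol", {"wallpaper": "beach.jpg",
                                  "signature": "Regards, Carol"})
    desktop = deliver_personality("carol", {"os": "standard-build"})
    assert desktop["wallpaper"] == "beach.jpg"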

The personality contains two different types of information that are necessary to deliver a familiar desktop to users. First are policy items: all the configuration and setup of the machine that is necessary for it to work in the broader environment. For providers, some of this will be infrastructure-specific, such as network configurations, but the majority will be customer-specific, breaking down into fine-grained detail about how the machine is to work. Examples of policy items include controlling where data is stored, setting up access to particular email servers, and detailed configuration of applications. Policy also covers restricting capabilities a user does not need, whether for security reasons or for more general operational requirements.
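
As an illustration, policy items might be represented as a set of machine-configuration entries applied to every standard desktop; the keys and values below are invented for the example:

    # Illustrative policy items: configuration the machine needs in order
    # to work in the customer's environment.
    policy = {
        "data_store": r"\\fileserver\home\%username%",  # where data is stored
        "mail_server": "mail.example.com",              # email server access
        "allow_usb_storage": False,                     # restricted capability
    }

    def apply_policy(desktop, policy):
        # Policy is layered onto the standard image before the user sees it.
        desktop.setdefault("config", {}).update(policy)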

The second aspect of personality is personalization: the myriad of small changes users make to their machines that give them a comfortable, personal and productive place to work. In some highly regulated environments, users may not be allowed to personalize their machines, but the majority of enterprise users expect to personalize their working environment. For instance, they expect to be able to make comfort changes such as setting desktop images, keeping a favorites list in their browser and having an IM client that logs on automatically. Productivity personalization covers a very wide range, but a representative sample includes an email signature block, toolbar positions in applications, language selections, and a variety of preferences across all their applications.
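
Personalization can be sketched the same way, and the two halves merged into the delivered personality; the merge rule here - policy wins on any conflict - is an assumption for illustration, not a statement about any product:

    def build_personality(policy, personalization):
        # The delivered personality is the union of the two; here we
        # assume policy takes precedence if a setting appears in both.
        merged = dict(personalization)
        merged.update(policy)
        return merged

    merged = build_personality(
        policy={"app_language": "en-US"},          # mandated by the business
        personalization={"app_language": "en-GB",  # the user's own preference
                         "email_signature": "Best, Dana"},
    )
    assert merged["app_language"] == "en-US"       # policy wins the conflict
    assert merged["email_signature"] == "Best, Dana"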

Short term, cloud-delivered desktops fit the "first-stage" model where each user has an image allocated to him once and takes that image forward, much like a traditional PC. However, this model will not deliver sufficient benefits for more general, wide-scale deployment. The key to being able to deliver desktops from the cloud is to make use of the economies of scale that can be achieved by standardizing the deliverables across as many users as possible. That scaling is only possible by taking a component-based view of client computing and assembling those components dynamically for the user. However, in the move to standardize we must remember that we are delivering a product - a user's desktop - that is personal to that user. Delivering personality into the standardized desktop is essential to get user acceptance of cloud-delivered desktops. It is only with "standard plus personality" that we will see real success and adoption of cloud-hosted desktops.

More Stories By Martin Ingram

Martin Ingram is vice president of strategy for AppSense, where he is responsible for understanding where the desktop computing market is going and deciding where AppSense should direct its products. He is recognized within the industry as an expert on application delivery. Martin has been with AppSense since 2005, having previously built companies around compliance and security, including Kalypton, MIMEsweeper, Baltimore Technologies, Tektronix and Avid. He holds an electrical engineering degree from Sheffield University.
