Do VMs Still Matter in the Cloud?

Virtual Machines encompass “virtual hardware” and very real operating systems

By John Considine

There’s a long-running debate about the true role of Virtual Machines (VMs) in cloud computing. In talking with CTOs at the large vendors as well as the “Clouderati” over the last two years, I’ve seen a persistent desire to eliminate the VM from cloud computing. A colleague of mine, Simeon Simeonov, wrote a blog post a couple of weeks ago making the case for eliminating the VM. While the argument is appealing, and there is growing support for the idea, I’d like to argue that there are compelling reasons to keep the Virtual Machine as the core of cloud computing.

Virtual Machines encompass “virtual hardware” and very real operating systems. VMs drive the economics and flexibility of the cloud by allowing complete servers to be created on demand and, in many cases, to share the same physical hardware. The virtual machine provides a complete environment for applications to run in, just as they would on their own individual server, including both the hardware and the operating system.
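To make “complete servers on demand” concrete, here’s a minimal sketch of provisioning a VM through a cloud API, using Python with the boto3 library against EC2. The region, AMI ID, instance type, and key name are placeholder assumptions for illustration, not values from this post.

```python
# A minimal sketch: provisioning a complete virtual server on demand.
# Assumes AWS credentials are already configured; the AMI ID and key
# name below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image (OS + disk)
    InstanceType="t3.micro",          # the "virtual hardware" profile
    KeyName="my-keypair",             # hypothetical SSH key pair
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched a complete server (hardware + OS) as {instance_id}")
```

One API call yields the whole stack the paragraph describes: virtual hardware plus a very real operating system.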

Sim and other cloud evangelists would like to see applications developed independently of the underlying operating system and hardware. Implied in this argument is that developers shouldn’t be constrained any longer by an “outdated” VM construct, but should design from scratch for the cloud and its horizontal scalability. This reminds me of early conversations I had when we were just starting CloudSwitch, which went something like: “If you just design your applications to be stateless, fault-tolerant, and horizontally scalable, then you can run them in the cloud.” The message seemed to be that if you do all the work to make your applications cloud-like, they will run great in the cloud. The motivation is cost savings, flexibility, and almost infinite scalability; the cost is redesigning everything around the limitations and architectures offered by the cloud providers.
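For readers who haven’t lived through that redesign, here is a minimal sketch of what “stateless and horizontally scalable” means in practice: the process keeps no local state, so any identical copy of it can serve any request. The Redis host and key naming are assumptions for illustration.

```python
# A minimal sketch of a "stateless" service: all state lives in an
# external store (Redis here), so any identical copy of this process
# can handle any request, and instances can fail or scale freely.
# The Redis hostname and key name are hypothetical.
import redis

store = redis.Redis(host="shared-redis.internal", port=6379)

def handle_request(user_id: str) -> int:
    # No in-process counters or sessions: every request reads and
    # writes shared state, so this worker holds nothing worth losing.
    return store.incr(f"visits:{user_id}")
```

Simple enough on a slide; the real cost is rebuilding every existing application to look like this.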

But why should we require everyone to adapt to the cloud instead of adapting the cloud to the users? Amazon’s EC2 was the very first “public cloud,” and it was designed with some really strange attributes, driven by a combination of technology choices and a web-centric view of the world. We ended up with notions of “ephemeral storage” and effectively random IP address assignment, as well as being told that servers can and will fail without notice or remediation. These properties would never work in an enterprise datacenter; I can’t imagine anyone proposing them, much less a company implementing them.
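To see how foreign this is to traditional infrastructure, consider that an early EC2 instance couldn’t even assume its own address: it had to ask the platform at runtime. A minimal sketch, assuming the classic (v1) EC2 instance metadata endpoint, which is only reachable from inside an instance:

```python
# A minimal sketch: discovering this instance's externally assigned,
# effectively random public IP at runtime via EC2's link-local
# instance metadata service.
from urllib.request import urlopen

METADATA_URL = "http://169.254.169.254/latest/meta-data/public-ipv4"

public_ip = urlopen(METADATA_URL, timeout=2).read().decode()
print(f"This boot, the cloud assigned us {public_ip}")
```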

But somehow, and this is what disruption is really about, it was OK for Amazon to offer this because users would adjust to the limitations. The process began with customers selecting web-based applications to put in the cloud. Then a number of startups formed to make this new computing environment easier to use: methods of communicating the changing addresses, ways to persist storage, methods of monitoring and restarting resources in the cloud, and much more.
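The “monitor and restart” tooling those startups sold often amounted to little more than a loop like the following sketch, which polls an instance and starts it again when the cloud has let it die. It uses boto3 against EC2; the instance ID and polling interval are assumptions for illustration.

```python
# A minimal sketch of the "watch it and restart it" pattern early
# cloud tooling offered: the cloud won't remediate a dead server for
# you, so you poll and recover it yourself. Instance ID is hypothetical.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical

while True:
    reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])
    state = reservations["Reservations"][0]["Instances"][0]["State"]["Name"]
    if state == "stopped":
        # The platform won't bring it back; we have to.
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
    time.sleep(60)
```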

As cloud computing continued to evolve, the clouds started offering “better” features. Amazon introduced persistent block storage (EBS) to provide “normal” storage, VPC to allow for better IP address management, and a host of other features that let more than just web applications run in the cloud. In the same timeframe a number of cloud providers entered the market with features and functions more closely aligned with “traditional” computing architectures.
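EBS is a good example of the cloud bending toward traditional expectations: a disk that survives the instance it is attached to. A minimal sketch with boto3; the availability zone, volume size, instance ID, and device name are placeholder assumptions.

```python
# A minimal sketch: creating persistent block storage (EBS) and
# attaching it to a server, so data now outlives the instance; the
# "normal" disk behavior enterprises expect. IDs are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # must match the instance's zone
    Size=100,                       # size in GiB
    VolumeType="gp3",
)

# Wait until the volume is ready before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance
    Device="/dev/sdf",                 # device name the OS will see
)
```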

The obvious question is: what is driving these “improvements”? Clearly the early clouds captured developers and web applications without these capabilities; just look at the number of startups using the cloud (pretty much all of them). I’d assert that enterprise customers are driving the more recent cloud feature sets, since the enterprise has both serious problems and serious money to spend. If this is true, then we can project the likely path both the clouds and the enterprises will follow.

This brings us back to the role of the Virtual Machine. Enterprises have learned over the years that details matter in complex systems. Even though we want to move toward application development that doesn’t touch hardware or operating system objects, we must recognize that important work is done at this level: hardware control, the creation and management of sockets, memory management, file system access, and so on. No matter how abstract the applications become, some form of operating system works with these low-level constructs. Further, changes at the operating system level can affect the whole system. Think of Windows automatic updates or Linux YUM updates: new packages or kernel patches have caused whole systems to fail, which is exactly why enterprises tightly control these updates. This means in turn that enterprises need control of their operating systems if they want to apply their own software and management policies, and the way you control your operating system in the cloud is with VMs.
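As a reminder of how much OS machinery even a “simple” application leans on, the sketch below opens a socket and touches the file system; both are services the kernel provides, whatever abstraction layer sits above it. The file name is an arbitrary example.

```python
# A minimal sketch: even trivial application code is a bundle of
# operating system services. The socket and the file below are both
# kernel-managed resources; no abstraction removes that dependency.
import socket

# Ask the OS for a TCP socket and a port (kernel network stack).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen()

# Ask the OS for file system access (kernel VFS, page cache, etc.).
with open("app-state.log", "a") as log:
    log.write(f"listening on port {server.getsockname()[1]}\n")

server.close()
```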

Enterprise requirements are driving the evolution and adoption of the cloud, and this will make the use of VMs even more important than it has been to date. Cloud providers know that enterprise customers are critical to their success and will make sure they deliver a cloud model that feels familiar and controllable to enterprise IT and developers.

