

Virtualization & Cloud Computing: Perfect Together

Will the amount of cloud capacity double every 18 months?

Reuven Cohen's Blog

Recently I've been asked about the benefits of cloud computing in comparison to virtualization. Generally my answer has been that they are an ideal match. For the most part, virtualization has been about doing more with less (consolidation). VMware in particular positioned their products and pricing in a way that encourages you to use the fewest servers possible. The interesting thing about cloud computing is that it's about doing more with more. Or, if you're Intel, doing more with Moore.

At its core, Intel is a company driven by a single mantra: "Moore's Law." According to Wikipedia, Moore's Law describes an important trend in the history of computer hardware: the number of transistors that can be inexpensively placed on an integrated circuit increases exponentially, doubling approximately every two years. The observation was first made by Intel co-founder Gordon E. Moore in a 1965 paper.

Over the last couple of years we have been working very closely with Intel, specifically in the area of virtualization. During this time we have learned a lot about how they think and what drives them as an organization. In one of my early pitches we described our approach to virtualization as "doing more with Moore," a play on the common phrase "doing more with less" combined with some of the ideas behind Moore's Law, which is all about growth and greater efficiencies. They loved the idea: for the first time someone was looking at virtualization not purely as a way to consolidate a data center but as a way to more effectively scale your overall capacity.

What is interesting about Moore's Law in regard to cloud computing is that it is no longer just about how many transistors you can get on a single CPU, but about how effectively you spread your compute capacity across more than one CPU, be it multi-core chips or hundreds, even thousands, of connected servers. Historically, the faster the CPU gets, the more demanding the applications built for it become. I am curious whether we're on the verge of seeing a similar "Moore's Law" applied to the cloud. And if so, will it follow the same principles? Will we start to see a "Ruv's Law" where the amount of cloud capacity doubles every 18 months, or will we reach a point where there is never enough excess capacity to meet the demand?
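To make the doubling arithmetic concrete, here is a minimal Python sketch. It is purely illustrative: the starting value of 1.0, the ten-year horizon, and the 18-month "Ruv's Law" period are assumptions for the sake of the example, not measurements of anything real.

# A minimal sketch of exponential doubling, comparing Moore's Law
# (roughly a 24-month doubling period) with a hypothetical "Ruv's Law"
# for cloud capacity (an assumed 18-month doubling period).
# Starting value and horizon are illustrative assumptions only.

def doublings(initial, months_elapsed, months_per_doubling):
    """Capacity after a given time, assuming a fixed doubling period."""
    return initial * 2 ** (months_elapsed / months_per_doubling)

if __name__ == "__main__":
    horizon_years = 10
    for years in range(0, horizon_years + 1, 2):
        months = years * 12
        moore = doublings(1.0, months, 24)  # Moore's Law: ~2 years
        ruv = doublings(1.0, months, 18)    # hypothetical: 18 months
        print(f"{years:>2} years: Moore x{moore:6.1f}   Ruv x{ruv:6.1f}")

Under these assumptions the difference compounds slowly: after a decade the hypothetical 18-month curve sits a bit more than three times higher than the two-year one (2^(120/18) versus 2^(120/24)).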


More Stories By Reuven Cohen

An instigator, part time provocateur, bootstrapper, amateur cloud lexicographer, and purveyor of random thoughts, 140 characters at a time.

Reuven is an early innovator in the cloud computing space as the founder of Enomaly in 2004 (acquired by Virtustream in February 2012). Enomaly was among the first to develop a self-service infrastructure-as-a-service (IaaS) platform (ECP), circa 2005. He also created SpotCloud (2011), the first commodity-style cloud computing spot market.

Reuven is also the co-creator of CloudCamp (100+ cities around the globe), an unconference where early adopters of cloud computing technologies exchange ideas; it is the largest of the ‘barcamp’ style of events.


