Essential Cloud Computing Characteristics

According to NIST, the cloud model is composed of five essential characteristics, three service models, and four deployment models

If you ask five different experts what cloud computing is, you may well get five different opinions. And all five may be correct. The best definition of cloud computing I have ever found is the National Institute of Standards and Technology (NIST) Definition of Cloud Computing. According to NIST, the cloud model is composed of five essential characteristics, three service models, and four deployment models. In this post I will look at the essential characteristics only and compare them to the traditional computing model; in future posts I will look at the service and deployment models.

Because computing always involves resources (CPU, memory, storage, networking, etc.), the premise of the cloud is an improved way to provision, access, and manage those resources. Let's look at each essential characteristic of the cloud:

On-Demand Self-Service
Essentially, this means that you (as a consumer of the resources) can provision resources whenever you want, and you can do this without assistance from the resource provider.

Here is an example. In the old days, if your application needed additional computing power to support growing load, the process you would normally go through was briefly as follows: call the hardware vendor and order new machines; once the hardware arrived, install the operating system, connect the machine to the network, and configure any firewall rules; next, install your application and add the machine to the pool of machines already handling the load for your application. This is a very simplistic view of the process, but it still requires you to interact with many internal and external teams: hardware vendors, IT administrators, network administrators, database administrators, operations, and others. As a result it can take weeks or even months to get the hardware ready to use.

Thanks to cloud computing, though, you can reduce this process to minutes. The whole lengthy process comes down to a click of a button or a call to the provider's API, and you can have the additional resources available within minutes, without anybody's assistance. Why is this important?

Because in the past the process involved many steps and usually took months, application owners often over-provisioned the environments that hosted their applications. This results in huge capital expenditures at the beginning of the project, resource underutilization throughout the project, and huge losses if the project doesn't succeed. With cloud computing, though, you are in control, and you can provision only enough resources to support your current load.
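
To make this concrete, here is a minimal sketch of what "a call to the provider's API" can look like, using the AWS SDK for Python (boto3) purely as an example; the region, machine image ID, and instance type are illustrative placeholders, and any other provider's provisioning API would make the same point.

# A minimal, hypothetical sketch of on-demand self-service using the AWS SDK
# for Python (boto3). The AMI ID below is a placeholder; any cloud provider's
# provisioning API would illustrate the same idea.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# One API call replaces weeks of ordering, racking, and configuring hardware.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned instance {instance_id} in minutes, not months")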

Broad Network Access
Well, this is not something new - we've had the Internet for more than 20 years already, and the cloud did not invent it. And although NIST notes that the cloud promotes the use of heterogeneous clients (like smartphones, tablets, etc.), I think this would be possible even without the cloud. However, there is one important thing that, in my opinion, the cloud enabled that would be very hard to do with the traditional model: the cloud made it easier to bring your application closer to your users around the world. "What is the difference?", you may ask. "Isn't that the same as the Internet or the Web?" Yes and no. Thanks to the Internet you were able to make your application available to users around the world, but there were significant differences in the user experience in different parts of the world. Let's say your company is based in California and you have a very popular application with millions of users in the US. Because you are based in California, all the servers that host your application are either in your basement or in a nearby datacenter, so that you can easily go and fix any hardware issues that may occur. Now think about the experience your users will get across the country! People on the East Coast will see slower response times and possibly more errors than people on the West Coast. If you wanted to expand globally, these problems would be amplified. The way to solve this was to deploy servers on the East Coast and in any other part of the world you wanted to expand to.

With cloud computing, though, you can simply provision new resources in the region you want to expand to, deploy your application, and start serving your users.

Again, it comes down to the cost you incur by deploying new datacenters around the world versus just using resources on demand and releasing them if you are not successful. Because the cloud is broadly accessible, you can rely on having the ability to provision resources in different parts of the world.
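
As a hedged illustration of this point, the sketch below provisions capacity in several regions with a few API calls (again using boto3 as an example SDK; the region list and image ID are placeholder assumptions):

# Hypothetical sketch: bringing the application closer to users by creating
# instances in several regions. The region names are real AWS regions; the
# AMI ID is a placeholder and would differ per region in practice.
import boto3

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-1"]

for region in REGIONS:
    ec2 = boto3.client("ec2", region_name=region)
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder, region-specific in reality
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(f"Provisioned capacity in {region}")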

Resource Pooling
One can argue whether resource pooling is good or bad. The part that raises the most concern among users is the colocation of applications on the same hardware or on the same virtual machine. Very often you hear that this compromises security, can impact your application's performance, and can even bring it down. Those were real concerns in the past, but with the advances in virtualization technology and the latest application runtimes you can consider them outdated. That doesn't mean you should not think about security and performance when you design your application.

The good side of resource pooling is that it enables cloud providers to achieve higher application density on the same hardware and much higher resource utilization (sometimes going up to 75-80%, compared to 10-12% in the traditional approach). As a result, the price of resource usage continues to fall. Another benefit of resource pooling is that resources can easily be shifted to where the demand is, without the customer needing to know where those resources come from or where they are located. Once again, as a customer you can request as many resources from the pool as you need at a certain time; once you are done utilizing them, you return them to the pool so that somebody else can use them. Because you as a customer are not aware of the size of the resource pool, your perception is that the resources are unlimited. In contrast, in the traditional approach application owners have always been constrained by the resources available on a limited number of machines (i.e., the ones they have ordered and installed in their own datacenter).

Rapid Elasticity
Elasticity is tightly related to the pooling of resources; it allows you to easily expand and contract the amount of resources your application is using. The best part is that this expansion and contraction can be automated, saving you money when your application is under light load and doesn't need many resources.

To achieve this elasticity in the traditional case, the process would look something like this: when the load on your application increases, you power up more machines and add them to the pool of servers that run your application; when the load decreases, you remove servers from the pool and power them off. Of course, we all know that nobody actually does this, because it is much more expensive to constantly add and remove machines from the pool, so everybody runs the maximum number of machines all the time with very low utilization. And we all know that if the resource planning is not done right and the load on the application grows beyond what the maximum number of machines can handle, the result is an increase in errors, dropped requests, and unhappy customers.

In the cloud scenario, where you can add and remove resources within minutes, you don't need to spend a great deal of time on capacity planning. You can start very small, monitor the usage of your application, and add more resources as you grow.
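
To illustrate the kind of automation this enables, here is a minimal, hypothetical scaling loop; the metric source, the scale-out/scale-in functions, and the thresholds are stubs and example values, not any particular provider's API:

# Hypothetical autoscaling loop, for illustration only. The metric source and
# the scale functions are stubs standing in for provider-specific API calls;
# the thresholds are arbitrary example values.
import random
import time

SCALE_OUT_THRESHOLD = 0.75  # add capacity above 75% average CPU
SCALE_IN_THRESHOLD = 0.25   # remove capacity below 25% average CPU

def get_average_cpu() -> float:
    # Stub: in reality this would query the provider's metrics API.
    return random.random()

def scale_out(instances: int) -> None:
    # Stub: in reality this would call the provider's provisioning API.
    print(f"Scaling out by {instances} instance(s)")

def scale_in(instances: int) -> None:
    # Stub: in reality this would release instances back to the pool.
    print(f"Scaling in by {instances} instance(s)")

def autoscale(cycles: int, poll_seconds: float = 1.0) -> None:
    for _ in range(cycles):
        cpu = get_average_cpu()
        if cpu > SCALE_OUT_THRESHOLD:
            scale_out(instances=1)
        elif cpu < SCALE_IN_THRESHOLD:
            scale_in(instances=1)
        time.sleep(poll_seconds)

autoscale(cycles=5)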

Measured Service
In order to make money, cloud providers need the ability to measure resource usage. Because in most cases cloud monetization is based on a pay-per-use model, they need to be able to give customers a breakdown of how much of which resources they have used. As mentioned in the NIST definition, this provides transparency for both the provider and the consumer of the service.

The ability to measure resource usage is important to you, the consumer of the service, in several different ways. First, based on historical data you can budget for the future growth of your application. It also allows you to better budget new projects that deliver similar applications. And it matters to application architects and developers, who can optimize their applications for lower resource utilization (in the end, everything comes down to dollars on the monthly bill).

On the other side, it helps cloud providers better optimize their datacenter resources and achieve higher density per unit of hardware. It also helps them with capacity planning, so that they don't end up at 100% utilization with no excess capacity to cover unexpected customer growth.

Compare this to the traditional approach, where you never knew how much of your compute capacity was utilized, how much of your network capacity was used, or how much of your storage was occupied. In rare cases companies were able to collect such statistics, but those were almost never used to deliver financial benefit to the enterprise.
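
For a concrete picture of metered usage, here is a small sketch that reads hourly average CPU utilization for one instance from a provider's metrics API (AWS CloudWatch via boto3, as an example; the instance ID is a placeholder):

# Hypothetical sketch: reading metered usage data from a provider's metrics
# API (AWS CloudWatch via boto3 in this example). The instance ID is a
# placeholder; the same idea applies to network, storage, and billing metrics.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,               # one data point per hour
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(f"{point['Timestamp']}: {point['Average']:.1f}% average CPU")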

With those five essential characteristics in mind, you should be able to recognize the "true" cloud offerings available on the market. In the next posts I will go over the service and deployment models for cloud computing.

More Stories By Toddy Mladenov

Toddy Mladenov has more than 15 years of experience in software development and technology consulting at companies like Microsoft, SAP, and 3Com. Currently he is CTO of Agitare Technologies, Inc., a boutique consulting company that specializes in cloud computing and big data solutions. Before Agitare Tech, Toddy spent a few years with the PaaS startup Apprenda and more than six years working on Microsoft's cloud computing platform Windows Azure, as well as Windows Client and MSN/Windows Live. During his career at Microsoft he managed different aspects of the software development process for Windows Azure and Windows Services. He also evangelized Microsoft cloud services among open source communities like PHP and Java. In the past he developed enterprise software for the German software giant SAP and several startups in Europe, and managed technical sales for 3Com in the Balkan region.

With his broad industry experience, international background, and end-user point of view, Toddy has a unique approach to technology. He believes that technology should be developed to improve people's lives and is eager to share his knowledge on topics like cloud computing, mobile, and web development.
