How Do You Qualify Something as Cloud Computing?

Jeff Bezos: "You don't generate your own electricity, why generate your own computing?"

Melvin Lancelot's Blog

Practically everyone with an online business model now refers to their service as cloud computing - from your average Joe hosting firm all the way to the SaaS/S+S vendors, everyone wants to ride the next buzzword wave, and this distorts the term cloud computing altogether. Part of the reason is that historically the term "cloud" loosely referred to anything available online/on the Internet. Ask a bunch of geeks and you would get a different explanation of cloud computing from each person (if you have ever asked a bunch of people what Web 2.0 is, you know what I mean). So how can we qualify whether a service is really leveraging the cloud computing model?

That's a tough question. An easier way to answer it is by first examining the behavior of services provided by some of the well-known cloud computing vendors. Let's take Amazon and Google as two examples: both have different business models, but under the hood, when we use their cloud computing services within our own application/service, the three common behaviors we see are:

a. Scalability
b. Availability
c. Economy/cost effectiveness

Let's discuss scalability first. Imagine you're tasked with designing an online application/service that should be "Internet scalable". Imagine (if you will) that you're designing the next big social network or the next big YouTube. How do you go about designing it to support millions of users? For that matter, how does anyone do it?

The short answer: you do it iteratively. "Iteratively" is a good euphemism; the reality is that you do it after several design blunders and limitations :). In the iterative approach you first design a cost-effective solution that scales for a smaller audience; then, when you start seeing more traffic, you add more hardware until the point where it doesn't improve anything, and then you redesign for better scalability. This is how Amazon and Google have grown as well, and how they have designed their overall systems to support such high scalability - but interestingly, they have abstracted their designs to a degree that they could be repackaged into a subscription-based service - a cloud computing service.

But first, how do you build a massively scalable solution? How would you structure your architecture to achieve linear scalability?

Any architecture can be described as being composed of two types of components:

a. Stateless components
b. Stateful components

Stateless components are those that only do some processing on data and don't persist any state - hence they are easy to scale via a scaled-out design, which as a byproduct also gives you higher availability. With a scaled-out architecture you can keep adding inexpensive boxes to the system, thereby reducing your cost while the system keeps humming.
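
To make that concrete, here's a minimal Python sketch (the handler and its digest logic are invented purely for illustration): because the output depends only on the input, any number of identical workers behind a load balancer can serve any request.

```python
import hashlib

def handle_request(payload: bytes) -> str:
    # Stateless: the response depends only on the input, so this function
    # can run on any box behind a load balancer, and adding boxes adds
    # capacity roughly linearly.
    return hashlib.sha256(payload).hexdigest()

# Any two workers give the same answer for the same request, so the load
# balancer is free to route each request anywhere:
assert handle_request(b"hello") == handle_request(b"hello")
```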

Stateful components, on the other hand, are those that persist the state of the resources they work with - for example, file systems, databases, BLOBs, etc. Traditionally the only option to scale these was a scaled-up approach - i.e., beef up the hardware on the server. This is more expensive than a scaled-out architecture and introduces a single point of failure. We have traditionally used approaches such as data replication to scale out these components, but they introduce several complications. Fundamentally, stateful components are the ones that limit the overall performance and scalability of a system.
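
Here's a toy illustration of that trade-off; the dict below stands in for a shared store such as memcached or a database, and all the names are hypothetical. Moving session state out of the web servers keeps the web tier stateless and scaled out, and concentrates the scaling problem in the one stateful component:

```python
# Stand-in for the stateful tier, shared by all web servers.
shared_session_store = {}

def handle_login(user_id):
    # Any web server can handle the login, because the session is written
    # to the shared store rather than kept in local memory.
    shared_session_store[user_id] = {"logged_in": True}

def handle_page_view(user_id):
    # Likewise, any web server can answer, so the web tier scales out;
    # the shared store is now the single component that must be scaled up
    # or replicated.
    return shared_session_store.get(user_id, {}).get("logged_in", False)

handle_login("alice")
assert handle_page_view("alice") is True
```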

Stateful components may be notoriously difficult to scale, but even stateless components can present scalability challenges when you're planning for massively scalable/Internet-scale scenarios. Cloud computing vendors have made significant investments in technology to ensure that compute-intensive processing can be parallelized and distributed to the max. Google's MapReduce is an elegant approach to solving these challenges.
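
For a feel of the programming model, here's a single-process Python rendition of the MapReduce idea - not Google's implementation, just the shape of it: map emits key/value pairs, the framework groups them by key, and reduce folds each group. In the real system, the map and reduce calls are what get parallelized across thousands of machines.

```python
from collections import defaultdict

def map_phase(document):
    # Emit (key, value) pairs - here, the classic word count.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    # Fold all values for one key into a single result.
    return (word, sum(counts))

def map_reduce(documents):
    # The "framework": run map, group by key, then run reduce per key.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_phase(doc):
            groups[key].append(value)
    return [reduce_phase(k, v) for k, v in groups.items()]

print(map_reduce(["the cloud scales", "the cloud"]))
# [('the', 2), ('cloud', 2), ('scales', 1)]
```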

We need a new breed of products to handle how stateless and stateful components can be scaled and distributed. The traditional approach of persisting state in a database just doesn't make sense at this scale; increasingly, the trend is not to store atomic-level transaction information in a database as we used to, but to store it in the form of blobs, while still managing resources economically. A large part of Amazon's and Google's innovation in cloud computing (their magic sauce, if you will) involves developing these proprietary components.
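
As an illustration of that blob-storage pattern - a sketch, not anyone's production design; the bucket name and key scheme are hypothetical, and it assumes the boto3 AWS SDK is installed and credentials are configured - a transaction record can be serialized and written straight to a blob store such as Amazon S3:

```python
import json
import boto3  # assumes the AWS SDK for Python is installed and configured

s3 = boto3.client("s3")

def save_transaction(txn_id, record):
    # Instead of writing the transaction as rows in a relational database,
    # serialize the whole record and store it as a blob.
    s3.put_object(
        Bucket="example-transactions",   # hypothetical bucket name
        Key="txns/" + txn_id + ".json",  # hypothetical key scheme
        Body=json.dumps(record).encode("utf-8"),
    )
```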

Another way of looking at the scalability that cloud computing gives you is the ability to scale your computing resources as and when you see demand - after all, it's a pay-as-you-go model. Traditionally, online companies have provisioned hardware to meet spikes in traffic (like holiday seasons), which implies they are unnecessarily paying for extra capacity they don't use most of the time. By hosting with a cloud vendor they can dramatically reduce their operational cost.
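
A back-of-the-envelope sketch, with entirely made-up numbers, shows why this matters:

```python
# Provisioning for peak vs. paying as you go. All figures are hypothetical.
hours_per_month = 730
peak_servers = 100    # what you must own to survive the holiday spike
average_servers = 20  # what you actually need most of the year
hourly_rate = 0.10    # assumed per-server-hour price

owned_cost = peak_servers * hourly_rate * hours_per_month
elastic_cost = average_servers * hourly_rate * hours_per_month

print("provision-for-peak: $%.0f/month" % owned_cost)    # $7300
print("pay-as-you-go:      $%.0f/month" % elastic_cost)  # $1460
```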

Well, it's not just Amazon and Google that are thinking in this direction; several platform vendors like Microsoft, as well as open source groups, are releasing products that will address these challenges. You can expect to see some radically different products from these vendors that address the distributed, massively scalable challenges. You can also expect hosting companies to leverage these prepackaged cloud computing capabilities and provide them as a subscription service.

Cloud computing vendors are able to provide Internet scalability at an affordable cost and can potentially give you a better SLA than if you were to manage your own infrastructure - that's the overall package that makes cloud computing so compelling, probably best described by Jeff Bezos: "You don't generate your own electricity, why generate your own computing?" Arguably there are other factors that influence your vendor decision, but we hope that the next time you're evaluating a cloud computing vendor/solution, or building your own, you know what to look for.


More Stories By Melvin T Lancelot

Melvin Lancelot is a technical architect working for the consulting group at Aditi Technologies. On a day-to-day basis he helps ISVs and enterprises succeed by leveraging the right blend of technology, platform, and market trends. He contributes to a blog at http://techturks.blogspot.com.
