Understanding "Clouded" Terms of Cloud Computing

The definitions vary from source to source, author to author

Michael Sheehan's "GoGrid" Blog

There seems to be a lot of debate around the different computing terms used to describe server and hosting solutions. In the past, the blogosphere threw around terms like Grid, Cloud, Utility, Distributed and Cluster Computing almost interchangeably. But, as of this revision, one term is rising to the top: Cloud Computing. (See recent trend analysis here.)

The definitions vary from source to source, author to author. While I cannot (and will not) attempt to articulate the be-all and end-all definition, I can write about how I view these terms and how they apply to the products that we offer, namely GoGrid. But before I dive into MY interpretation, a look at how others view these subjects may shed some light on our framework.

Terms as defined by Wikipedia

Many people view Wikipedia as an authoritative source of information, but that is always subject to debate. Wikipedia defines some of these terms as follows (not the end-all definitions, though), and I have taken the liberty of removing non-relevant information for argument's sake:

  • Grid Computing - http://en.wikipedia.org/wiki/Grid_computing
    • Multiple independent computing clusters which act like a “grid” because they are composed of resource nodes not located within a single administrative domain. (formal)
    • Offering online computation or storage as a metered commercial service, known as utility computing, computing on demand, or cloud computing.
    • The creation of a “virtual supercomputer” by using spare computing resources within an organization.
  • Cloud Computing - http://en.wikipedia.org/wiki/Cloud_computing
    • Cloud computing is a computing paradigm shift where computing is moved away from personal computers or an individual application server to a “cloud” of computers. Users of the cloud only need to be concerned with the computing service being asked for, as the underlying details of how it is achieved are hidden. This method of distributed computing is done through pooling all computer resources together and being managed by software rather than a human.
    • The services being requested of a cloud are not limited to using web applications, but can also be IT management tasks such as requesting of systems, a software stack or a specific web appliance.
  • Utility Computing - http://en.wikipedia.org/wiki/Utility_computing
    • Conventional Internet hosting services have the capability to quickly arrange for the rental of individual servers, for example to provision a bank of web servers to accommodate a sudden surge in traffic to a web site.
    • “Utility computing” usually envisions some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the “back end” to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer. The technique of running a single calculation on multiple computers is known as distributed computing.
  • Distributed Computing - http://en.wikipedia.org/wiki/Distributed_computing
    • A method of computer processing in which different parts of a program are run simultaneously on two or more computers that are communicating with each other over a network. Distributed computing is a type of segmented or parallel computing, but the latter term is most commonly used to refer to processing in which different parts of a program run simultaneously on two or more processors that are part of the same computer. While both types of processing require that a program be segmented—divided into sections that can run simultaneously, distributed computing also requires that the division of the program take into account the different environments on which the different sections of the program will be running. For example, two computers are likely to have different file systems and different hardware components.

Upon initial read, Wikipedia seems fairly close to my definitions but still not exact. Of note, "metered commercial service" rings true within both the Grid Computing and Cloud Computing definitions; however, it also spills into the Utility Computing mantra. As a side note, our newest product, GoGrid, utilizes a metered service similar to how an energy company would charge you for electricity or gas, basing charges simply on what you use.
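
To make that metering model concrete, here is a minimal sketch of utility-style billing; the resource names and rates below are hypothetical, not GoGrid's actual pricing or API:

```python
# A minimal sketch of utility-style metered billing, analogous to how an
# energy company bills for electricity or gas: charges accrue only for
# what was actually consumed. Rates and resource names are made up.

HOURLY_RATES = {
    "ram_gb_hours": 0.19,   # price per GB of RAM per hour (hypothetical)
    "transfer_gb": 0.50,    # price per GB of outbound transfer (hypothetical)
}

def metered_charge(usage: dict) -> float:
    """Sum the cost across resources, billing only for consumed units."""
    return sum(HOURLY_RATES[resource] * units
               for resource, units in usage.items())

# A server that ran with 2 GB of RAM for 10 hours and served 5 GB of traffic:
print(metered_charge({"ram_gb_hours": 2 * 10, "transfer_gb": 5}))  # ~6.3
```

The contrast with "old school" hosting is that nothing is pre-paid: the bill is simply the sum over resources of units consumed times the unit rate.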

Traditional “Clouds” vs. Modern “Clouds”

Grid Computing also seems to have some origins in the idea of harnessing multiple computer resources to gain a more powerful source of shared power and computational resources. However, I would like to suggest that this definition is showing some age and, in my opinion, falls more under Distributed Computing. When I think about Distributed Computing, Folding@home or SETI@home come to mind, which is definitely very different from where things are moving now. So, let's put Distributed Computing aside for this discussion.

Traditionally, the "cloud" was loosely defined as anything outside of a controlled network. When we, as Hosting Providers, discussed "the cloud" in the past with our customers, it was about the nebulous network known as the Internet. That cloud is loosely managed and traditionally unreliable. To that end, we do not refer to anything within our control or our networks as "the cloud," as it is too vague and unmanageable; it is outside of our Service Level Agreement and nothing that we can guarantee or deem reliable. However, once traffic enters our network, we manage it. That is where the modern interpretation of the "Cloud" comes into play. Products like Amazon's EC2 and ServePath's GoGrid have internalized Cloud Computing by building a reliable infrastructure around it. While the Internet remains a Cloud of coupled servers and networks, GoGrid, for example, extends this by creating an infrastructure that offers "control in the cloud."

Originally, I wrote that “Cloud Computing does not necessarily equate to reliable service.” This, obviously, is a contradiction in itself if you apply both the historic and modern definitions at the same time. If one views the Internet as “Cloud Computing,” there are obvious weaknesses within this vast network. With the Internet, you are at the whim of various service providers, Internet backbones and routers managing the traffic within the Cloud. But if one applies the more modern interpretations of this, Cloud Computing now offers robust infrastructure, features and services that were previously unavailable.

Tying the Grid to the Cloud

In order to provide "modern" Cloud Computing, a provider must have some sort of organized and controlled network infrastructure and topology. What any particular service provider chooses is up to them. For GoGrid, we elected to build our Cloud offering on top of a Grid of servers, and to use a Utility-based billing model that only charges the end-user for what they use within our "Cloud." The end result is a tightly controlled Grid infrastructure that provides a Cloud Computing experience, more so than most, if not all, of the other hosting providers out there.
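
As a rough illustration of that layering, the sketch below pools identical grid nodes behind a management layer that allocates capacity on demand and reports billable usage. All class and method names are hypothetical, not GoGrid's actual software:

```python
import time

# A minimal sketch of a grid of similarly-configured servers fronted by
# management software that hands out capacity on demand and meters it
# for utility billing. Names are illustrative only.

class GridNode:
    """One server in the grid; every node is configured alike."""
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.allocated_at = None

class GridManager:
    """Management layer: pools nodes, allocates on demand, tracks usage."""
    def __init__(self, nodes):
        self.free = list(nodes)
        self.in_use = {}

    def allocate(self) -> GridNode:
        node = self.free.pop()             # any node will do; all are alike
        node.allocated_at = time.time()
        self.in_use[node.node_id] = node
        return node

    def release(self, node: GridNode) -> float:
        """Return the node to the pool and report billable hours."""
        hours = (time.time() - node.allocated_at) / 3600.0
        del self.in_use[node.node_id]
        self.free.append(node)
        return hours                       # feed into metered billing above
```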

However, what is important here is looking at Cloud, Grid and/or Utility Computing from the perspective of a Hosting Provider. This is definitely where things get contentious. As I mentioned before, GoGrid offers a traditional utility billing process where you simply pay for what you use. This breaks from many "old school" hosting billing processes: paying up-front for server(s) and bandwidth, committing to month- or year-long contracts, and then paying for overages. Does this mean that it is Utility Computing? Not really; one has to dig into this a bit more. GoGrid uses a network of similarly-configured servers, bound together by management and administrative servers and virtualization tools, to provide a very unique Cloud offering that is distinct from traditional hosting.

Dedicated, Managed and Cloud Servers offered by ServePath all guarantee hardware resources like RAM, as well as Load Balancing and full root and administrator access, but the paths rapidly diverge from that point. Once one steps into the virtualization arena, or dare I say "the cloud," new features become available, including rapid deployments, cloning, snapshots, fault tolerance and on-demand scalability.
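
The sketch below illustrates, against a hypothetical in-memory model rather than any real hypervisor API, what cloning, snapshots and restore look like as operations:

```python
import copy

# A minimal sketch of the virtualization features named above (cloning,
# snapshots, restore). The model is hypothetical; real virtualization
# stacks expose comparable operations through their own APIs.

class VirtualServer:
    def __init__(self, name: str, image: dict):
        self.name = name
        self.image = image                  # stand-in for a disk image
        self.snapshots = {}

    def snapshot(self, label: str):
        """Record a point-in-time copy of the server's state."""
        self.snapshots[label] = copy.deepcopy(self.image)

    def restore(self, label: str):
        """Roll back to a named snapshot (a form of fault tolerance)."""
        self.image = copy.deepcopy(self.snapshots[label])

    def clone(self, new_name: str) -> "VirtualServer":
        """Rapid deployment: stamp out an identical server from this image."""
        return VirtualServer(new_name, copy.deepcopy(self.image))

web1 = VirtualServer("web1", {"os": "linux", "apps": ["nginx"]})
web1.snapshot("before-upgrade")
web2 = web1.clone("web2")                   # on-demand horizontal scaling
```

The underlying point is that once a server is virtualized it becomes data: anything you can copy, you can snapshot, restore or clone on demand.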

ServePath chose Grid Computing to power GoGrid, providing the flexibility, scalability and robust infrastructure that form the foundation of an award-winning Cloud Infrastructure product. The end result is a Cloud Hosting Provider offering that delivers better environmental properties, faster vertical and horizontal scalability and, ultimately, a better fit for cost-, performance- and energy-conscious customers.


[This appeared originally here and is republished by kind permission of the author, who retains copyright.]

More Stories By Michael Sheehan

Michael Sheehan is the Technology Evangelist for Cloud Computing Infrastructure provider GoGrid and ServePath and is an avid technology pundit. GoGrid is the cloud hosting division of ServePath Dedicated Hosting, a company with extensive expertise and experience in web hosting infrastructure. Follow him on Twitter.



Most Recent Comments
maydbs 10/30/08 12:11:04 PM EDT

Hi Michael,

Even though you wrote this a few months ago, I got to read it today and thought it was very current, considering how much resistance we've been having to Cloud Computing. I linked this post on the blog I'm working on, to illustrate that maybe some people are just too afraid because they don't understand it well enough! Let me know if it offends you in any way and I'll remove it immediately. Thanks!

Snehal Antani 07/17/08 12:57:37 PM EDT

So in response to a blog post that positions Grid and Cloud computing, I've expanded on my thoughts in this area. You can read the discussions at: http://www-128.ibm.com/developerworks/forums/thread.jspa?threadID=214794...

The post is as follows:
I see "Grid Computing" as the coordinated execution of a complex task across a collection of resources. The burden of the "grid" is to provide control over that execution (initiating the execution, stopping the execution, aggregating results of the execution, reporting the status of the execution, and so on).

The "grid" should be transparent to the end user. Take [email protected] for example. The "User" is the researcher that will analyze the results of all of the computations. The [email protected] platform manages partitioning the data into discrete chunks, dispatching & monitoring the results across the collection of CPU's spread across the world, and aggregating the results to provide a single view to the "user".

Cloud Computing, on the other hand, provides the virtualized infrastructure upon which the Grid Endpoints will execute. So, for example, the "cloud" would provide an "infinite" number of operating system images on which the Folding@home software would execute. The cloud shouldn't care about application-specific data, nor should it care about the business logic that is actually executing within a virtualized image. The cloud cares about allocating new images (synonymous with LPARs) for applications to run, keeping track of how much physical resource (actual CPU cycles, for example) the virtual images consumed, cleaning up the virtual images upon completion, and billing the client for the amount of resources consumed. So, with these definitions, going back to my example of Folding@home, I would argue that this software has both a grid computing component and a cloud computing component (the # of registered computers is a pool of hardware resources that already have the Folding@home grid application containers installed and ready to go), but we should be sure to see two separate components and responsibilities: the decision to pick a physical machine to dispatch to, and the grid container that executes the scientific processing.

To summarize, grid applications and therefore the "Grid Computing" paradigm, which I consider an application architecture and containers for running the business logic, would execute on top of an "infrastructure cloud", which appears as an infinite # of LPAR's.
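
That separation of concerns can be sketched in a few lines: the Cloud below only allocates, meters and destroys images, while the GridManager only partitions, dispatches and aggregates. All names are illustrative, not any real product's API:

```python
# Hypothetical sketch: the "cloud" handles virtual images and chargeback;
# the "grid" handles partitioning, dispatch and aggregation.

class Cloud:
    """Infrastructure layer: images and chargeback only."""
    def __init__(self):
        self.billed_units = 0

    def allocate_image(self) -> dict:
        return {"cpu_units_used": 0}        # stand-in for an LPAR/VM image

    def destroy_image(self, image: dict):
        self.billed_units += image["cpu_units_used"]   # chargeback

class GridManager:
    """Application layer: partition, dispatch, monitor, aggregate."""
    def __init__(self, cloud: Cloud):
        self.cloud = cloud

    def run(self, data, chunk_size, work_fn):
        chunks = [data[i:i + chunk_size]
                  for i in range(0, len(data), chunk_size)]
        results = []
        for chunk in chunks:
            image = self.cloud.allocate_image()  # the cloud's concern
            results.append(work_fn(chunk))       # the grid's concern
            image["cpu_units_used"] += len(chunk)
            self.cloud.destroy_image(image)
        return sum(results)                      # single view for the "user"

cloud = Cloud()
print(GridManager(cloud).run(list(range(100)), 10, sum),
      cloud.billed_units)                        # 4950 100
```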

BTW, we've had the ability to run 'private clouds' for 30-40 years - multi-tenancy via S/390 & MVS - and we do it all over the place today. The key difference is that today, with Amazon EC2 for example, we can dynamically create and then destroy complete 'LPARs' relatively cheaply; whereas on the mainframe and other big-iron hardware, LPARs tend to be statically defined. In both cases the hardware is virtualized under the covers, some sort of VM/hypervisor contains the operating system image, some type of application server or container executes the business logic, and some type of workload manager assures workload priorities and provides the chargeback.

I've alluded to some of this in my article on 'enterprise grid and batch computing': http://www-128.ibm.com/developerworks/websphere/techjournal/0804_antani/0804_antani.html.

The Parallel Job Manager (described in that article) in WebSphere XD Compute Grid would essentially be the Grid Manager, whose job is to coordinate the execution of complex tasks across the cluster of resources (Grid Execution Environments). Today we don't discuss the ability to dynamically create new LPARs (and therefore call ourselves a cloud computing infrastructure), but you can easily do this with a product like Tivoli Provisioning Manager. Basically, take the bottom image in my article: http://www-128.ibm.com/developerworks/websphere/techjournal/0804_antani/0804_antani.html#xdegc and connect Tivoli Provisioning Manager to the On-Demand Router (part of WebSphere Virtual Enterprise).

Snehal Antani 06/29/08 06:41:06 PM EDT

Excellent article. It provides a great survey of the technologies and helps put each of them in perspective. I wrote an article that complements this one; I discuss how various patterns from each of these domains can be brought together to provide a complete grid and batch solution: http://www.ibm.com/developerworks/websphere/techjournal/0804_antani/0804...

Blue Grid 06/06/08 04:00:15 PM EDT

IBM has devoted 200 researchers to its cloud computing project.
