
@CloudExpo: Blog Feed Post

Muscle in the Cloud

Many documents and introductions to cloud computing explain what the Cloud really is and what can be done inside it, covering everything from the systems that make up the Cloud to the applications and services running in it.

This blog entry covers another aspect: computation in the Cloud.

So what does that mean?
Going back in history a bit, we started with terms such as supercomputing and clustering. This involved linking several physical computer systems together over a network connection and running specific software that let all of the machines operate as a single supercomputer. The two common standards at the time were Parallel Virtual Machine (PVM) and Message Passing Interface (MPI).

No more bare-metal systems

Today we have the Cloud, so we no longer need bare-metal systems; we can run the whole cluster within the Cloud environment. Software for running clusters in the Cloud is typically built around a technique called MapReduce: a master system accepts a job requiring some work, splits that work into smaller parts, distributes those parts among its workers, and combines their results so the cluster works towards the answer collectively.
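As a minimal sketch of that split-and-combine pattern (a word count, not taken from any particular MapReduce product), the master splits the input among workers, each worker maps its chunk to a partial count, and a reduce step merges the partial results. Threads stand in here for what would be separate machines in a real cluster.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def map_chunk(lines):
    """Map step: one worker counts the words in its own chunk."""
    return Counter(word for line in lines for word in line.split())

def merge_counts(left, right):
    """Reduce step: combine two partial word-count tables."""
    left.update(right)
    return left

def mapreduce_wordcount(lines, workers=4):
    # The master splits the job into smaller parts, one per worker.
    chunks = [lines[i::workers] for i in range(workers)]
    # In a real cluster each chunk would go to a separate machine;
    # here threads stand in for the workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(map_chunk, chunks))
    # Work towards the answer collectively: merge the partial results.
    return reduce(merge_counts, partials, Counter())

counts = mapreduce_wordcount(["the cloud scales", "the cloud computes"])
print(counts["cloud"])  # each worker saw part of the data; the merged count is 2
```

The function and chunking scheme are illustrative; real frameworks such as Hadoop add distributed storage, shuffling, and fault tolerance on top of this same idea.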

This level of computing has proven very powerful, from sorting huge amounts of data in minutes rather than hours to experiments in predicting stock-exchange fluctuations. The range of practical applications it could satisfy is also huge: pretty much anything from encoding and decoding video and audio streams to computing weather predictions and generating alerts in real time.

Main benefit of cloud computing
The key benefit of using the Cloud over the traditional methods of building supercomputers is scalability: if a job requires more workers, more workers can be spawned on demand to pick up the slack where needed. This happens on the fly, much like scaling a service such as a web application under load.
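A toy sketch of that on-demand idea (the function names and the backlog threshold are illustrative, not from the post): size a worker pool to the backlog of queued work, the way an autoscaler sizes a web tier to its load.

```python
import queue
import threading

def worker(tasks, results):
    """A worker pulls parts of the job until told to stop."""
    while True:
        item = tasks.get()
        if item is None:              # shutdown signal
            break
        results.put(item * item)      # stand-in for the real work
        tasks.task_done()

def run_with_elastic_pool(jobs, min_workers=2, backlog_per_worker=4):
    tasks, results = queue.Queue(), queue.Queue()
    for job in jobs:
        tasks.put(job)
    # Spawn workers on demand: the bigger the backlog, the more
    # workers -- much like autoscaling a web application under load.
    n_workers = max(min_workers, len(jobs) // backlog_per_worker)
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    tasks.join()                      # wait until the whole job is done
    for _ in threads:                 # then retire the workers
        tasks.put(None)
    for t in threads:
        t.join()
    return sorted(results.queue)
```

In a real Cloud deployment the "spawn" step would provision new virtual machine or container instances rather than threads, but the sizing decision is the same.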

Closing consideration
In the Cloud you only pay for what you use; there are no infrastructure, power, or footprint costs to worry about. Large growth can therefore be expected here, as the costs make economic sense given the amount of raw computing power you can have at your fingertips.


More Stories By Arjan de Jong

Arjan de Jong is Sales & Marketing Manager of Basefarm and has been working in the Internet industry since 1997.
