
Interfaces for Private and Public Cloud Computing

Public and private cloud computing from the perspective of their different application scope and interfaces

An entire ecosystem is evolving around cloud computing. Interface standardization efforts, commercial products, cloud infrastructure and management services, virtual appliance providers and open-source solutions are filling niches in the cloud ecosystem. The role and position of a component or a service in the ecosystem are defined by its capabilities, the consumers of those capabilities and its relationship with other components and services.

This article presents public and private cloud computing from the perspective of their different application scope and interfaces.

Interfaces for Public Cloud Computing
Public or external clouds offer virtualized resources as a service, enabling the deployment of an entire IT infrastructure without the associated capital costs, paying only for the capacity actually used. Amazon EC2, ElasticHosts, GoGrid and FlexiScale are examples of commercial cloud providers of elastic capacity, offering a public interface for remote management of virtualized server instances within their proprietary infrastructure. With the growing popularity of these cloud offerings, an ecosystem of tools is emerging that can be used to transform an organization's existing infrastructure into a public cloud. Technologies such as Globus Nimbus or Eucalyptus provide open-source implementations of cloud-like public interfaces, and projects such as RESERVOIR are developing open-source toolkits for building any cloud architecture.

The standardization of a public cloud interface is the aim of the OGF Open Cloud Computing Interface Working Group (OCCI-WG). The OCCI-WG is delivering an API specification for remote management of cloud computing infrastructure, allowing the development of interoperable tools for common tasks on public clouds, including deployment, autonomic scaling and monitoring. The main consumers of this API would be service management platforms, technologies for building hybrid clouds, and service providers. The working group maintains a complete list of existing cloud APIs and a list of references to studies comparing them. The requirements for the new specification are being extracted from a collection of use cases contributed by the community, and the group is supported by relevant companies and open-source initiatives in the cloud computing ecosystem.
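To illustrate the kind of interoperable tooling such a specification enables, here is a minimal sketch of building an OCCI-style HTTP request for creating a compute resource. The scheme URI and attribute names follow the draft OCCI text rendering; the final specification, and any given provider's endpoint, may well differ.

```python
# Sketch of an OCCI-style "create compute" request.
# The category scheme and attribute names are taken from the draft
# OCCI HTTP rendering and may change in the final specification.

def occi_create_compute(cores, memory_gb, hostname):
    """Build the HTTP headers for an OCCI 'create compute' POST."""
    headers = {
        # The Category header identifies the kind of resource to create.
        "Category": ('compute; '
                     'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                     'class="kind"'),
        # Resource attributes are carried as X-OCCI-Attribute pairs.
        "X-OCCI-Attribute": (f"occi.compute.cores={cores}, "
                             f"occi.compute.memory={memory_gb}, "
                             f"occi.compute.hostname=\"{hostname}\""),
    }
    return headers

if __name__ == "__main__":
    hdrs = occi_create_compute(2, 4, "web01")
    print(hdrs["X-OCCI-Attribute"])
```

Because the interface is plain HTTP with a text rendering, the same request-building code can target any compliant provider, which is precisely the interoperability the working group is after.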

Interoperability is not only about the standardization of interfaces, but also about the portability of virtual machines. The DMTF Open Virtualization Format (OVF) can be used as a means for the customers of an IaaS provider to express their infrastructure needs. OVF was not designed with cloud computing in mind, so some issues need to be solved when it is applied in this environment, in particular around automatic elasticity, self-configuration and deployment constraints. In any case, standards for cloud interoperability (OCCI) and virtual machine portability (OVF) are imminent, and many providers are planning to adopt them.
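As a rough illustration of the format, the snippet below parses a deliberately minimal OVF envelope and lists the virtual systems it describes. A real descriptor also carries disk references, networks and a full virtual hardware section; the appliance name here is made up.

```python
import xml.etree.ElementTree as ET

# A deliberately minimal OVF envelope. Real descriptors also include
# References, DiskSection, NetworkSection and VirtualHardwareSection.
OVF = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem ovf:id="web-server">
    <Info>A single web server appliance</Info>
  </VirtualSystem>
</Envelope>"""

NS = {"ovf": "http://schemas.dmtf.org/ovf/envelope/1"}

root = ET.fromstring(OVF)
for vs in root.findall("ovf:VirtualSystem", NS):
    # ovf:id names each virtual machine in the package
    print(vs.get("{http://schemas.dmtf.org/ovf/envelope/1}id"))
```

A cloud provider accepting OVF could walk the envelope this way to map each VirtualSystem onto a virtual machine instance, which is where the elasticity and deployment-constraint gaps mentioned above become visible: the format has no native way to say "scale this system between 2 and 10 copies".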

Interfaces for Private Cloud Computing
On the other hand, there is a growing interest in tools for leasing compute capacity from the local infrastructure. The aim of these deployments is not to expose a cloud interface to the world in order to sell capacity over the Internet, but to provide local users with a flexible and agile private infrastructure to run service workloads within the administrative domain. This private or enterprise cloud model is not new, since datacenter management has been around for a while. In fact, I would venture that future datacenters will look like private clouds. Platform VM Orchestrator, VMware vSphere, Citrix Cloud Center, and Red Hat Enterprise Virtualization Manager are commercial tools for the management of virtualized services in the datacenter, and thus aimed at building private clouds. The OpenNebula Virtual Infrastructure Engine (now part of Ubuntu) is an open-source alternative for private cloud computing that also supports hybrid cloud deployments to supplement the local infrastructure with computing capacity from an external cloud.

Private cloud interfaces should therefore allow the integration of the virtualized distributed infrastructure into the datacenter management stack, including user and administration support. A private cloud interface should provide semantics rich enough, far beyond those provided by public clouds, to ease this integration. Such an interface should expose additional functionality for virtualization, networking, image and physical resource configuration, management, monitoring and accounting that public cloud interfaces do not offer.

The standardization of a private cloud interface may be the aim of the new DMTF Cloud Computing Incubator, given that, according to its charter, one of its benefits is to enable the use of cloud computing within enterprises. The DMTF Open Cloud Standards Incubator Leadership Board currently includes most of the main providers and integrators of private cloud solutions. On the other hand, although conceived as a library for interfacing with different virtualization technologies, the libvirt virtualization API could also be used as an interface for private cloud computing. This is the approach taken by the libvirt implementation of OpenNebula: implementing libvirt on top of a virtual infrastructure manager provides an abstraction of a whole cluster of resources (each with its own hypervisor), so the cluster can be managed like any other libvirt node.
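libvirt drives guests (or, in the OpenNebula case, a whole cluster presented as one node) through the same XML domain descriptions. The sketch below builds a minimal domain definition in the standard libvirt format; the guest name and sizes are illustrative, not taken from any particular deployment.

```python
import xml.etree.ElementTree as ET

def domain_xml(name, memory_kib, vcpus):
    """Build a minimal libvirt domain description for a KVM guest.

    A real definition would also carry disks, network interfaces
    and devices; this sketch shows only the required skeleton.
    """
    dom = ET.Element("domain", type="kvm")
    ET.SubElement(dom, "name").text = name
    ET.SubElement(dom, "memory").text = str(memory_kib)   # in KiB
    ET.SubElement(dom, "vcpu").text = str(vcpus)
    os_el = ET.SubElement(dom, "os")
    ET.SubElement(os_el, "type").text = "hvm"             # full virtualization
    return ET.tostring(dom, encoding="unicode")

if __name__ == "__main__":
    print(domain_xml("ttylinux", 524288, 1))
```

The point of the libvirt approach is exactly this uniformity: the same domain description a tool would hand to a single hypervisor can be handed to a virtual infrastructure manager, which then decides on which physical host of the cluster the guest actually runs.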

About Using Public Interfaces for Private Cloud Deployments
Using public cloud interfaces to access the local infrastructure would reduce the cost of learning a new interface when moving from a private to a public cloud, but at the expense of providing local users with limited functionality, losing the comfort and control of datacenter operations, and using, within the administrative domain, communication protocols and security mechanisms originally created for remote management. Moreover, several local cloud technologies support cloudbursting to build hybrid clouds, combining local infrastructure with public cloud-based infrastructure and enabling highly scalable hosting environments.

That does not mean, of course, that you cannot expose a public interface on top of your private cloud solution, for example to provide partners or external users with access to your infrastructure, or to sell your excess capacity. Obviously, a local cloud solution is the natural back-end for any public cloud.

More Stories By Ignacio M. Llorente

Dr. Llorente is Director of the OpenNebula Project and CEO & co-founder at C12G Labs. He is an entrepreneur and researcher in the field of cloud and distributed computing, having managed several international projects and initiatives on Cloud Computing, and authored many articles in the leading journals and proceedings books. Dr. Llorente is one of the pioneers and world's leading authorities on Cloud Computing. He has held several appointments as independent expert and consultant for the European Commission and several companies and national governments. He has given many keynotes and invited talks in the main international events in cloud computing, has served on several Groups of Experts on Cloud Computing convened by international organizations, such as the European Commission and the World Economic Forum, and has contributed to several Cloud Computing panels and roadmaps. He founded and co-chaired the Open Grid Forum Working Group on Open Cloud Computing Interface, and has participated in the main European projects in Cloud Computing. Llorente holds a Ph.D in Computer Science (UCM) and an Executive MBA (IE Business School), and is a Full Professor (Catedratico) and the Head of the Distributed Systems Architecture Group at UCM.
