A Design for an Agile Cloud Management Platform

Good administration of the control plane ensures that applications perform and that regulatory concerns and risks are mitigated.

In the early part of 2013, EMC announced a new storage virtualization product called ViPR that delivers a software interface to block, object and HDFS storage services layered on heterogeneous storage. As part of that announcement there was an architectural discussion of how ViPR would provide these services to applications, which entails breaking the design into two components: the control plane and the data plane.

The control plane provides common interfaces for provisioning, policy and management, while the data plane provides interfaces for data access from applications. By separating these two layers, EMC creates an architecture that is agile and enables new services to be added over time without impacting production services. Since ViPR is focused on storage, it will, unfortunately, never be expanded to encompass an entire cloud management stack. However, the architecture is interesting, and aspects of it lend themselves to building a best-of-breed cloud management platform.
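To make that separation concrete, here is a minimal Python sketch of the two-plane idea. The names (ControlPlane, DataPlane, InMemoryBackend) are hypothetical illustrations, not ViPR APIs: provisioning and policy operations sit behind one interface, data access behind another, so either side can evolve without disturbing the other.

```python
from abc import ABC, abstractmethod


class ControlPlane(ABC):
    """Provisioning, policy and management operations (hypothetical interface)."""

    @abstractmethod
    def provision_volume(self, name: str, size_gb: int, policy: str) -> str:
        """Create a volume under a named policy and return its identifier."""


class DataPlane(ABC):
    """Data access operations exposed to applications (hypothetical interface)."""

    @abstractmethod
    def read(self, volume_id: str, key: str) -> bytes: ...

    @abstractmethod
    def write(self, volume_id: str, key: str, data: bytes) -> None: ...


class InMemoryBackend(ControlPlane, DataPlane):
    """Toy backend: a new service only has to implement the two interfaces,
    so adding one does not disturb applications already in production."""

    def __init__(self):
        self._volumes = {}

    def provision_volume(self, name, size_gb, policy):
        vol_id = f"{name}-{len(self._volumes)}"
        self._volumes[vol_id] = {}
        return vol_id

    def read(self, volume_id, key):
        return self._volumes[volume_id][key]

    def write(self, volume_id, key, data):
        self._volumes[volume_id][key] = data


backend = InMemoryBackend()
vol = backend.provision_volume("app-data", size_gb=10, policy="gold")  # control plane call
backend.write(vol, "greeting", b"hello")                               # data plane call
print(backend.read(vol, "greeting"))
```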

Starting with the control/data plane concept, I broke apart the cloud into multiple planes and then focused on how the layers would communicate to enable a cloud management platform that inherently scaled in and out as well as up and down. Figure 1 illustrates the outcome of this effort and each plane is described in more detail below:

Figure 1: Plane-based Architecture

Application Plane – This plane focuses on the deployment, lifecycle and management of the running applications. In today’s lingo, this is part of the Platform-as-a-Service (PaaS) architecture. A key characteristic of PaaS is that services need to be designed to “plug in” to the PaaS container in order to be made available to the applications. In this architecture, the application plane now has a common interface to manage the data plane and the compute plane. Hence, those services are now available to the application regardless of their underlying location or implementation.
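As a rough sketch of the plug-in idea (PaaSContainer, LocalQueue and RemoteQueue are made-up names, not any specific PaaS), services register with the platform container and applications reach them through one common interface, regardless of where the service actually runs:

```python
class PaaSContainer:
    """Hypothetical platform container: services plug in, applications look them up."""

    def __init__(self):
        self._services = {}

    def plug_in(self, name, service):
        # Once registered, the service is available to every application.
        self._services[name] = service

    def service(self, name):
        return self._services[name]


class LocalQueue:
    def send(self, msg):
        print(f"local queue <- {msg}")


class RemoteQueue:
    def send(self, msg):
        print(f"remote queue <- {msg}")


platform = PaaSContainer()
platform.plug_in("queue", LocalQueue())   # could just as easily be RemoteQueue()

# The application sees only the common interface, not the location or implementation.
platform.service("queue").send("order-1001")
```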

Data Plane – Since this is a comprehensive cloud management platform, I’ve moved the data plane up in the architecture to become an aggregation layer for all types of data, including databases and files, as well as operational information created by the compute and control planes. Through these two layers’ designs, I can now build business applications as well as a single pane of glass to manage my entire cloud regardless of the physical components that make up the cloud infrastructure.
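The sketch below illustrates the aggregation idea under the same caveat (hypothetical names, standard library only): a data plane that federates several sources, whether they hold business records or operational telemetry from the compute and control planes, behind a single lookup interface.

```python
from typing import Dict, Protocol


class DataSource(Protocol):
    """Anything the data plane can aggregate: databases, files, operational telemetry."""

    def get(self, key: str) -> str: ...


class DictSource:
    """Toy source backed by a dictionary."""

    def __init__(self, records: Dict[str, str]):
        self._records = records

    def get(self, key: str) -> str:
        return self._records[key]


class DataPlaneAggregator:
    """One lookup interface over heterogeneous sources (hypothetical)."""

    def __init__(self):
        self._sources: Dict[str, DataSource] = {}

    def register(self, namespace: str, source: DataSource) -> None:
        self._sources[namespace] = source

    def get(self, namespace: str, key: str) -> str:
        return self._sources[namespace].get(key)


plane = DataPlaneAggregator()
plane.register("orders", DictSource({"1001": "shipped"}))         # business data
plane.register("ops/compute", DictSource({"host-1": "cpu=42%"}))  # operational data
print(plane.get("orders", "1001"))
print(plane.get("ops/compute", "host-1"))
```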

Compute Plane – This plane manages the resources for operating the application and data planes. Because the application and data planes integrate through this layer, the cloud design no longer needs to rely on a single hypervisor, which provides tremendous freedom for cloud applications and data to migrate to wherever the economics (performance, cost, etc.) are best.

Control Plane – This plane implements the interfaces for operations staff to configure and operate the underlying hardware platforms that comprise the cloud infrastructure. It implements governance and access controls, and supports policies for deciding which pool resources should be allocated from when the compute plane makes a request. The compute and hardware planes together deliver a consolidated and unified cloud resource pool regardless of the underlying componentry.
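A minimal sketch of that governance-plus-policy role, assuming made-up pool names and a toy cost model: the control plane first checks who is asking, then picks which underlying pool should satisfy a request coming from the compute plane.

```python
from dataclasses import dataclass


@dataclass
class ResourcePool:
    name: str
    free_vcpus: int
    cost_per_vcpu: float  # toy economic signal (could be performance, locality, ...)


class ControlPlanePolicy:
    """Hypothetical governance and placement policy for compute-plane requests."""

    def __init__(self, pools, allowed_roles=("ops", "platform")):
        self._pools = pools
        self._allowed = set(allowed_roles)

    def allocate(self, role, vcpus):
        # Governance: only authorized roles may draw from the resource pool.
        if role not in self._allowed:
            raise PermissionError(f"role '{role}' may not allocate resources")
        # Policy: the cheapest pool that still has capacity wins.
        candidates = [p for p in self._pools if p.free_vcpus >= vcpus]
        if not candidates:
            raise RuntimeError("no pool can satisfy the request")
        chosen = min(candidates, key=lambda p: p.cost_per_vcpu)
        chosen.free_vcpus -= vcpus
        return chosen.name


cp = ControlPlanePolicy([ResourcePool("on-prem", 64, 0.02),
                         ResourcePool("public-cloud", 512, 0.05)])
print(cp.allocate("ops", vcpus=8))  # -> "on-prem" while it has capacity
```

The policy here is simply the cheapest pool with capacity; a real control plane would also weigh performance, locality and the regulatory constraints mentioned above.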

Hardware Plane – This is the physical equipment that will comprise the cloud infrastructure resource pool.

Figure 2 illustrates how the integrated stack would look and how the planes communicate to drive independence and agility.

Figure 2: Integration Pathways

While this is all conceptual, it follows many of the existing patterns for building Service Oriented Architectures (SOA). What’s really interesting to me about this architecture is that everything above the control and hardware planes can scale out in a common manner with a single set of tools and interfaces, hence driving toward a single pane of glass. It also puts a lot of emphasis on good administration of the control plane so that applications perform and regulatory concerns and risks are mitigated.

