Cloud Computing Reference Architectures, Models and Frameworks

Making sense of the many reference architectures, models and frameworks for cloud

Reference ‘Things’
A Reference Architecture (RA) “should” provide a blueprint or template architecture that can be reused by others wishing to adopt a similar solution. A Reference Model (RM) should explain the concepts and relationships that underlie the RA. At Everware-CBDI we then use the term Reference Framework (RF) as a container for both. Reference architectures, models and frameworks help to make sense of Cloud Computing.

Unfortunately, such formality is absent from the various reference architectures, models and frameworks that have been published for Cloud Computing; these frequently mix elements of architecture and model, and then apply one of the terms seemingly at random.

In developing the CBDI-Service Architecture and Engineering Reference Framework (SAE) in support of SOA (Service Oriented Architecture), Everware-CBDI separated out the various parts as shown in Figure 1. We developed a detailed RA and RM for SOA, with particular emphasis on a rich and detailed Meta Model for SOA and a Maturity Model for SOA. We also developed a detailed process and task decomposition for SOA activities.

But the RF is easily generalized, as shown in Figure 1, where the various elements could be applied to any domain, and explicit references, for example to "SOA Meta Model" or "SOA Standards", can be removed.

Figure 1 – Generalized Reference Framework

The benefit of this approach is that elements of the framework can then be mapped to each other in different ways to support alternative perspectives, such as different usage or adoption scenarios, or the viewpoint of an individual participant or organization. In contrast, most of the Cloud Computing reference architectures, models and frameworks proposed today apply to only a single perspective.

Current Cloud Computing Reference Architecture, Models and Frameworks
As discussed, there are many frameworks and models to choose from. It is not my intention to detail and critique them all individually; credit must go to NIST, which has already done much of that in its 2010 Survey of Cloud Architecture Reference Models.

We may classify Cloud reference models as one of two styles.

Analysis of these shows that they typically contain:

  • Roles – that would be better placed in the Organization section of an RF

  • Activities – which would be part of the Process Model

  • Layered Architecture – which would be part of the Reference Architecture

Used this way, the generalized RF in Figure 1 becomes a useful tool for analyzing proposed Cloud Computing reference architectures, models and frameworks, both to understand better what they actually contain and as a basis for developing an enterprise-specific framework.

Everware-CBDI recommends that it is more useful to model the capabilities required for Cloud Computing than to list them all as activities, as the latter may imply processes and tasks, which is not always the case. Across the industry, capability modeling is rapidly becoming the de facto standard approach to business design, and it seems highly appropriate to use the technique in planning Cloud frameworks. Using this technique, capabilities are separated from the processes that use them and from the roles that possess them, and can consequently be mapped in different ways to show different scenarios. The capability model belongs in the RM section of the RF and should be used extensively in disciplines such as roadmap planning, process improvement, technology planning, and service management.
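To make the separation concrete, the sketch below models capabilities, the processes that use them, and the roles that possess them as independent sets with explicit mappings. All capability, process and role names are hypothetical illustrations, not taken from any published framework:

```python
# Minimal sketch of a capability model: capabilities are declared once,
# independently of processes and roles, then mapped to each for a scenario.
# All names here are illustrative only.

capabilities = {"Provisioning", "Metering", "SLA Management"}

# process -> capabilities it uses
processes = {
    "Deploy Service": {"Provisioning"},
    "Bill Customer": {"Metering"},
}

# role -> capabilities it possesses
roles = {
    "Cloud Provider": {"Provisioning", "Metering", "SLA Management"},
    "Cloud Consumer": {"SLA Management"},
}

def roles_for_process(process):
    """Which roles can support a process, given the capabilities it uses?"""
    needed = processes[process]
    return {r for r, caps in roles.items() if needed <= caps}

print(roles_for_process("Deploy Service"))  # {'Cloud Provider'}
```

Because the three sets are kept apart, the same capability inventory can be re-mapped for a different adoption scenario simply by swapping the `processes` or `roles` mappings.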

A useful source of capabilities is provided by the Cloud Computing Reference Model/Architecture in The Role of Enterprise Architecture in Federal Cloud Computing published by the American Council for Technology.

Figure 2 takes the various elements from these different architectures, models and frameworks and places them into a generic RF. The intention here is not to reinvent the wheel, but to consolidate the elements contained across the different reference architectures, models and frameworks for Cloud Computing into a unified framework.

Figure 2 - Cloud Computing Elements Placed in Generic Reference Frameworks


Elements highlighted in green are usually covered by existing Cloud Computing reference architectures, models and frameworks. These focus primarily on the operational state of the life cycle, and the implementation and deployment architectures.

Mapping
Once the various elements have been placed into their appropriate part of the RF, you can start mapping them to suit different scenarios. For example, activities in the process decomposition can be mapped against roles – either organizational roles or people roles – perhaps using a RAEW (Responsibility, Authority, Expertise, Work) matrix, as shown in Table 1.

Table 1 – Mapping Process Activities to Roles

At a high level, Table 1 may appear a bit obvious, but at a more detailed level it helps you understand where, and by whom, these activities will be performed in your organization, or how your scenario might differ from the proposed reference architectures mentioned so far.
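A RAEW-style mapping can be kept as simple tabular data. The sketch below records, for each activity/role pair, which of Responsibility, Authority, Expertise or Work the role holds; the activity and role names are hypothetical examples, not drawn from Table 1:

```python
# Hypothetical RAEW matrix: for each (activity, role) pair, record which of
# Responsibility (R), Authority (A), Expertise (E), Work (W) the role holds.
raew = {
    ("Provision Service", "Cloud Provider"): "W",   # performs the work
    ("Provision Service", "Cloud Consumer"): "A",   # authorizes the request
    ("Monitor SLA",       "Cloud Provider"): "RW",
    ("Monitor SLA",       "Cloud Consumer"): "E",
}

def roles_doing_work(activity):
    """List the roles that actually perform an activity (hold 'W')."""
    return sorted(role for (act, role), codes in raew.items()
                  if act == activity and "W" in codes)

print(roles_doing_work("Provision Service"))  # ['Cloud Provider']
```

Queried this way, the same matrix answers both "who does this?" and "who must approve it?", which is exactly the scenario-specific detail the high-level tables gloss over.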

In some scenarios the cloud consumer, not just the provider, may be required to perform certain cloud management activities. Whilst the cloud provider may be required to supply the necessary management capabilities, both the consumer and the provider perform management activities.

Hence mapping capabilities to roles, as in Table 2, is another useful exercise, to understand who provides and who uses the various capabilities. Whilst the NIST, IBM and other reference architectures do show this, as mentioned earlier their view focuses primarily on the operational state and on the capabilities required in the operational infrastructure. As Table 2 shows, the span of responsibility and capability is very much wider than the operational perspective!

Table 2 – Mapping Capabilities to Roles
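The provider/user distinction behind a table like this can be captured very simply. In the sketch below (capability and role names are hypothetical, not Table 2's actual contents), each capability records who provides it and who uses it, so the consumer's span of responsibility falls out of a query:

```python
# Hypothetical capability-to-role mapping distinguishing providers from users.
# A capability may be provided by one role but exercised by several.
capability_map = {
    "Self-Service Portal": {"provided_by": {"Cloud Provider"},
                            "used_by":     {"Cloud Consumer"}},
    "Usage Metering":      {"provided_by": {"Cloud Provider"},
                            "used_by":     {"Cloud Provider", "Cloud Consumer"}},
}

def consumer_touchpoints(mapping):
    """Capabilities the consumer uses but does not provide -- the part of the
    span of responsibility that extends beyond the provider's own view."""
    return sorted(cap for cap, m in mapping.items()
                  if "Cloud Consumer" in m["used_by"]
                  and "Cloud Consumer" not in m["provided_by"])

print(consumer_touchpoints(capability_map))
# ['Self-Service Portal', 'Usage Metering']
```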

Recommendations
The value of a reference framework is to provide consistency in aspects such as terminology, deliverables and governance across an organization, and to help you understand the totality of the task and manage adoption proactively rather than allowing uncontrolled experimentation. This permits sensible reuse across the whole spectrum of capabilities and avoids the necessity for each enterprise to reinvent the wheel, and to make mistakes that could have been avoided.

I recommend that organizations:

  • Build their own reference framework. This should be applicable to their

    1. Current and planned maturity states for cloud computing. See the Everware-CBDI research note on Cloud Computing Maturity Model

    2. Primary role(s) – as provider, consumer, broker, etc.

  • Expect to customize public domain reference framework materials to suit their specific purpose

  • Consider how they will address those sections not covered by public domain reference framework materials (the pink areas in Figure 2)

  • Consider how the capability requirements change when moving from a purely cloud consumer perspective (which may be the case when there is just tactical use of public cloud) to more enterprise-wide usage involving private cloud, and perhaps integration of public, private, and non-cloud applications (see Service Portfolio Planning and Architecture for Cloud Services for an enterprise perspective)

More Stories By Lawrence Wilkes

Lawrence Wilkes is a consultant, author and researcher developing best practices in Service Oriented Architecture (SOA), Enterprise Architecture (EA), Application Modernization (AM), and Cloud Computing. As well as consulting to clients, Lawrence has developed education and certification programmes used by organizations and individuals the world over, as well as a knowledgebase of best practices licenced by major corporations. See the education and products pages on http://www.everware-cbdi.com
