Reference Architecture for Cloud Computing

IBM releases second version of its cloud reference architecture

Admittedly, when I was heads-down in code earlier in my career, I did not pay much attention to reference architectures. We had our own internal architectures that served as ‘the way and the truth', and reference architectures for our product or solution domain were simply out of scope. Besides, reference architectures are, by design, not detailed enough to steer someone implementing just one of the hundreds of components that fall under them. So, for the most part, I ignored them, even though I could hear rumblings coming from rooms full of folks arguing over revision 25 of the reference architecture for some problem domain or another.

Fast forward a few years to a change of professional venue, and my outlook on reference architectures is a good deal different. If I were still developing, I'm sure my outlook would be much the same as it was. However, talking with users frequently has made me aware that such architectures and solution domain overviews can be of great value to both buyers and providers. For buyers, reference architectures can help to orient them in a particular domain, and they can guide implementation and buying strategies. For providers, reference architectures clearly communicate their outlook on a particular domain to buyers and the broader market. Put simply, reference architectures serve both sides of the transaction.

Now that's not to say that reference architectures come without their detractors. There are always those who stand ready to point out holes and biases in a particular provider's reference architecture. In fact, some seem to write off reference architectures entirely as instruments of marketing. In my opinion, some of these complaints are without merit and a bit overly cynical. Other complaints rise above typical inter-vendor sniping and point out valid holes, oversights, and biases in a particular provider's architecture. Open discourse and communication are good. In that light, I was glad to see IBM publish the second version of its cloud computing reference architecture to the Open Group earlier this week.

The document, which you can download here, explains the reference architecture in detail, but I want to look at the major highlights. To start, let's consider the high-level diagram for the architecture:

As you can see, the architecture orients itself around user roles for cloud computing. On either end, you have the cloud service creator and cloud service consumer. As its name implies, the cloud service creator role covers any type of cloud service creation tooling. These tools include software development environments, virtual image development tools, process choreography solutions, and anything else a developer may use to create services for the cloud.

On the other side of the architecture, the cloud service consumer comes into focus. As you well know, in a cloud environment there are many potential service consumers. The architecture above accounts for in-house IT as well as cloud service integration tools as consumers. There are countless more, but just with these you can begin to appreciate the challenge of effectively enabling the ‘consumer.' This requires self-service portals, service catalogs, automation capability, federated security, federated connectivity, and more. It is certainly no small task.
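To make the consumer-enablement point concrete, here is a minimal sketch of a self-service catalog of the kind the architecture calls for. All class and field names here are my own illustration, not terms from the IBM document; a real implementation would also handle entitlements, approvals, and automated provisioning.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a self-service catalog entry; names are
# illustrative, not taken from the IBM reference architecture.
@dataclass
class CatalogEntry:
    name: str
    delivery_model: str   # e.g. "IaaS", "PaaS", "SaaS"
    unit: str             # metered unit, e.g. "vm-hour"
    unit_price: float     # price per metered unit

@dataclass
class ServiceCatalog:
    entries: dict = field(default_factory=dict)

    def publish(self, entry: CatalogEntry) -> None:
        """Make an offering visible in the self-service portal."""
        self.entries[entry.name] = entry

    def request(self, name: str) -> CatalogEntry:
        # A real portal would check entitlements and kick off
        # automated provisioning here; this sketch just looks up.
        return self.entries[name]

catalog = ServiceCatalog()
catalog.publish(CatalogEntry("small-vm", "IaaS", "vm-hour", 0.05))
offer = catalog.request("small-vm")
print(offer.delivery_model)  # IaaS
```

Even this toy version hints at why the consumer side is "no small task": the catalog is only useful once it is wired to automation, identity federation, and billing behind the scenes.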

Finally, in the middle of the diagram, we have perhaps the most complex role, the cloud service provider. This section builds on top of a shared, usually virtualized infrastructure to address two basic facets for providers: services and service management. From a services perspective, we see the trinity of the cloud (IaaS, PaaS, SaaS), with an added wrinkle, Business Process as a Service. As the diagram acknowledges, existing services and partner services will nearly always augment these services, thereby implying the need for tools that provide both functional and non-functional integration capabilities.
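The point about existing and partner services augmenting the core delivery models can be sketched as a thin adapter that lets an external service be consumed alongside native offerings. This is purely illustrative; the names and the payroll example are hypothetical, not part of the reference architecture.

```python
# Illustrative sketch: the four delivery models, plus a partner
# service wrapped behind a common interface (functional integration).
SERVICE_MODELS = ("IaaS", "PaaS", "SaaS", "BPaaS")

class PartnerServiceAdapter:
    """Wraps an external partner service so it can be invoked
    alongside native cloud services through one interface."""
    def __init__(self, name, invoke):
        self.name = name
        self._invoke = invoke

    def call(self, payload):
        # A real adapter would also handle auth, retries, and
        # non-functional concerns such as SLAs and auditing.
        return self._invoke(payload)

# A hypothetical partner payroll service surfaced next to a
# native Business Process as a Service offering.
payroll = PartnerServiceAdapter("partner-payroll",
                                lambda p: {"processed": p})
result = payroll.call({"employee": "E42"})
```

The adapter is the "functional integration" half; the non-functional half (security, SLAs, monitoring of the partner service) is what makes integration tooling genuinely hard.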

Opposite the services, we see the common management framework that divides into two major categories: Operational Support Services (OSS) and Business Support Services (BSS). Naturally, the OSS accounts for those capabilities that a provider needs to effectively operate a cloud environment. This includes provisioning, monitoring, license management, service lifecycle management, and a slew of other considerations. BSS outlines the capabilities providers need to support the business requirements of cloud, and this includes pricing, metering, billing, order management, order fulfillment, and more.
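As a concrete example of where OSS and BSS meet, consider the metering-to-billing path: the OSS side emits usage records, and the BSS side prices them against a rate card. The rates and record shape below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical rate card; prices per metered unit are illustrative.
RATE_CARD = {"vm-hour": 0.05, "gb-month": 0.10}

def bill(usage_records):
    """Aggregate metered usage into per-account charges.

    usage_records: iterable of (account, unit, quantity) tuples,
    as an OSS metering component might emit them.
    """
    totals = defaultdict(float)
    for account, unit, quantity in usage_records:
        totals[account] += RATE_CARD[unit] * quantity
    return dict(totals)

invoice = bill([
    ("acme", "vm-hour", 100),   # 100 VM-hours of compute
    ("acme", "gb-month", 50),   # 50 GB-months of storage
])
print(invoice)  # {'acme': 10.0}
```

Order management, fulfillment, and pricing tiers would all sit around this core loop in a real BSS, but the essential contract (metered usage in, charges out) is this simple.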

Of course, there are non-functional requirements that span all three roles, including security, performance, resiliency, consumability, and governance. These wrap the three major roles in the reference architecture shown above.

I know there will be some who disagree with certain elements of this reference architecture, but that is good and healthy. For those who have strong opinions on this subject (one way or another), I encourage you to get involved. That is the benefit of this being in the Open Group. You can download the reference architecture, review it at your leisure, and then discuss and influence change via the mailing list. In other words, speak up!

More Stories By Dustin Amrhein

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server. While in that position, he worked on the development of Web services infrastructure and Web services programming models. In his current role, Dustin is a technical specialist for cloud, mobile, and data grid technology in IBM's WebSphere portfolio. He blogs at http://dustinamrhein.ulitzer.com. You can follow him on Twitter at http://twitter.com/damrhein.
