Architecting Beyond Cloud Computing’s Horseless Carriage

Consider how Cloud Computing’s unique characteristics will change how you do architecture

Today is a wonderful time for anyone interested in Cloud Computing to be working with the US government. On the one hand, the government considers Cloud to be strategically important, and they already have a track record as an early adopter of Cloud Computing on a grand scale. On the other hand, the government is also in the unique position of being able to drive standards for the approach—and in fact, they are even responsible for establishing the most widely adopted definition of Cloud Computing.

The federal agency that has taken this leadership position is the National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce. NIST’s formal definition of Cloud Computing is already well known—“a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Concise as that definition is, it only marks the beginning of the work NIST is doing to formalize and standardize the full breadth of Cloud Computing approaches, both within the government as well as for the world at large.

I learned about the breadth of NIST’s work on Cloud last week, when I had the pleasure of attending the NIST Cloud Computing Forum & Workshop. They are leading a cross-industry effort to “provide thought leadership and guidance around the cloud computing paradigm to catalyze its use within industry and government. NIST aims to shorten the adoption cycle, which will enable near-term cost savings and increased ability to quickly create and deploy enterprise applications. NIST aims to foster Cloud Computing systems and practices that support interoperability, portability, and security requirements that are appropriate and achievable for important usage scenarios.” To this end, they have followed up their formal definition with a Cloud Computing Technology Roadmap, which consists of three volumes: requirements to further US government adoption, information for Cloud adopters, and technical considerations for government Cloud Computing deployment decisions. They have also published a Cloud Computing reference architecture and standards roadmap.

To be sure, NIST has generated a daunting quantity of information here—but ignore these documents at your peril. If you work for a US government agency, then you likely have a mandate to move toward Cloud Computing, and NIST spells out many of the details. But even if you have nothing to do with the government, it’s important to remember that NIST is also a standards body in its own right, as well as a coordinating agency for other standards bodies. No other group or agency anywhere else in the world has achieved the same leadership position with respect to today’s nascent Cloud standards efforts.

One of the main reasons NIST is able to maintain this position is that they take an inclusive approach. Want to contribute? You’re welcome to. Have an issue with something in one of the documents? Then let them know. After all, one of the reasons they’ve generated so much content is that they have so many contributors, not just from the government, but from people around the world.

Even ZapThink isn’t above joining the fray. We’ve reviewed NIST’s documents in the context of ZapThink’s eye for agile enterprise architecture, and we’ve identified a missing link. Of course, looking at something and trying to identify what’s missing is always difficult, especially when so many contributors have already pored over the material so carefully. The trick is to break out of “horseless carriage” patterns of thinking: instead of considering the Cloud to be little more than an outsourced, virtualized data center, put on your architect’s hat and consider how Cloud Computing’s unique characteristics will change how you do architecture.

NIST’s Cloud Deployment Scenarios
I found this missing link when I reviewed the Cloud deployment scenarios in the NIST Standards Roadmap document. Their diagram (source: NIST) illustrates eight generic deployment scenarios, which they sort into three categories:

Single Cloud

  • Scenario 1: Deployment on a single Cloud
  • Scenario 2: Manage resources on a single Cloud
  • Scenario 3: Interface enterprise systems to a single Cloud
  • Scenario 4: Enterprise systems migrated or replaced on a single Cloud

Multiple Clouds (serially, one at a time)

  • Scenario 5: Migration between Clouds
  • Scenario 6: Interface across multiple Clouds
  • Scenario 7: Work with a selection of Clouds

Multiple Clouds (simultaneously, more than one at a time)

  • Scenario 8: Operate across multiple Clouds

From ZapThink’s perspective, the most interesting of these are scenarios 1, 3, and 4, because they consider the relationships between enterprise systems and the Cloud. ZapThink has written about these relationships before, most recently in The Keys to Enterprise Public Cloud, but also back in mid-2010, when we discussed Cloud Architecture’s Missing Link.

The missing link we pointed out in that ZapFlash was the ability to compose Cloud-based Services with on-premise Services as part of an enterprise SOA effort. It could be argued, however, that composing Cloud-based Services falls under Scenario 3, since Services are a type of interface. But there’s more to this story—and to understand how the NIST folks missed it, it’s important to follow their line of reasoning.
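Before following that line of reasoning, it helps to make the composition scenario concrete. Below is a minimal sketch of such a hybrid composition, assuming two hypothetical REST endpoints: an on-premise inventory Service and a Cloud-based pricing Service. The URLs and field names are illustrative placeholders, not anything drawn from the NIST documents.

```python
# Minimal sketch: compose an on-premise Service with a Cloud-based Service
# into a single business operation. All URLs and JSON field names below
# are hypothetical placeholders.
import requests

ON_PREM_INVENTORY = "https://erp.internal.example.com/inventory"  # on-premise Service
CLOUD_PRICING = "https://pricing.cloud.example.com/v1/quote"      # Cloud-based Service

def quote_order(sku: str, quantity: int) -> dict:
    """Check stock against the on-premise Service, then price via the Cloud."""
    stock = requests.get(f"{ON_PREM_INVENTORY}/{sku}", timeout=5).json()
    if stock["available"] < quantity:
        return {"sku": sku, "status": "insufficient-stock"}

    quote = requests.post(
        CLOUD_PRICING,
        json={"sku": sku, "quantity": quantity},
        timeout=5,
    ).json()
    return {"sku": sku, "status": "ok", "unit_price": quote["unit_price"]}
```

The point is not the plumbing but the architectural posture: the composition treats on-premise and Cloud-based Services as peers in a single business operation.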

NIST’s Blind Spot
NIST has divided their Cloud standards efforts into three categories: interoperability, portability, and security. Interoperability standards are the most straightforward, especially for anyone who has worked with Web Services, which of course are little more than standards-based interfaces intended to promote interoperability and loose coupling.
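To see why standards-based interfaces buy you loose coupling, consider a client written once against a standard interface and repointed at a different Cloud purely through configuration. This is a rough sketch with hypothetical base URLs, not any particular provider’s API:

```python
# Sketch: a client coded once against a standard interface can be repointed
# at a different Cloud by changing configuration, not code. URLs are
# hypothetical placeholders.
import requests

def get_status(base_url: str, resource_id: str) -> str:
    """Call the same standard interface, whichever Cloud hosts it."""
    response = requests.get(f"{base_url}/resources/{resource_id}/status", timeout=5)
    response.raise_for_status()
    return response.json()["status"]

# Swapping providers is a one-line configuration change:
status_a = get_status("https://api.cloud-a.example.com/v1", "vm-42")
status_b = get_status("https://api.cloud-b.example.com/v1", "vm-42")
```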

Portability standards are more complicated, because NIST considers both application portability and data portability. In the Cloud context, application portability centers on the ability to move virtual machine (VM) instances from one Cloud to another. Data portability, however, is more difficult, because applications process different kinds of data, and that data flows throughout an entire system. For one organization, data portability might mean moving a single database from one Cloud to another, but for a different organization, the requirement might be for the portability of an entire SaaS application, along with all of its distributed data.
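As a toy illustration of the data half of the problem, the sketch below exports records from one Cloud’s store into a neutral interchange format and replays them into another. The endpoints and record shape are hypothetical, and a real migration would have to move distributed state and in-flight data, not just one collection:

```python
# Sketch: data portability via a neutral interchange format (JSON Lines).
# Endpoints and record fields are hypothetical placeholders.
import json
import requests

SOURCE_CLOUD = "https://data.cloud-a.example.com/v1/customers"
TARGET_CLOUD = "https://data.cloud-b.example.com/v1/customers"

def export_records(path: str) -> None:
    """Dump each record from the source Cloud as one JSON line."""
    records = requests.get(SOURCE_CLOUD, timeout=10).json()
    with open(path, "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

def import_records(path: str) -> None:
    """Replay the neutral-format dump into the target Cloud."""
    with open(path) as f:
        for line in f:
            requests.post(TARGET_CLOUD, json=json.loads(line), timeout=10)

export_records("customers.jsonl")
import_records("customers.jsonl")
```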

NIST’s focus on interoperability and portability (and security, of course, which is an entire conversation in its own right) makes perfect sense in light of their focus on standards, since the standardization of these three capabilities will go a long way in furthering NIST’s core mission. So it’s no wonder that their three Cloud deployment scenarios that involve enterprise systems consist of deploying or migrating to a Cloud (facilitated by portability standards), or interfacing with a Cloud (facilitated by interoperability standards).

It should come as no surprise, therefore, that NIST missed another deployment scenario: building applications that leverage both on-premise and Cloud-based capabilities, where those applications rely upon more than interoperability, portability, and the ubiquitous security.

Building applications that are compositions of Cloud-based and on-premise Services is a simple example, but doesn’t go far enough, because even this scenario falls into the “horseless carriage” trap of considering the Cloud to be nothing more than a virtualized data center. Factor elasticity into the equation, however, and we must consider new approaches to architecting such applications that go beyond considerations of interoperability and portability.

Building the Cloud’s Inherent Elasticity into Hybrid Applications
More than any other characteristic, elasticity distinguishes true Clouds from simple virtualized data centers. If your app requires more resources, the Cloud will provision those resources automatically, and then release them when you’re done with them—until you need them again. Furthermore, those elastic resources may be among any of the different types of Cloud resources (networks, servers, storage, applications and Services, as per the NIST definition), or any combination thereof.
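A toy control loop makes the idea concrete. In the sketch below, `provision_worker` and `release_worker` stand in for whatever provisioning API a given Cloud actually exposes; they are hypothetical placeholders, not a real interface:

```python
# Toy elasticity loop: provision resources when load rises, release them
# when it falls. provision_worker/release_worker are hypothetical stand-ins
# for a real Cloud provisioning API.
import time

workers: list[str] = []

def provision_worker() -> str:
    worker_id = f"worker-{len(workers) + 1}"
    print(f"provisioned {worker_id}")
    return worker_id

def release_worker(worker_id: str) -> None:
    print(f"released {worker_id}")

def rebalance(queue_depth: int, per_worker_capacity: int = 10) -> None:
    """Scale the pool so capacity tracks demand, up or down."""
    needed = max(1, -(-queue_depth // per_worker_capacity))  # ceiling division
    while len(workers) < needed:
        workers.append(provision_worker())
    while len(workers) > needed:
        release_worker(workers.pop())

for depth in [5, 45, 120, 30, 0]:  # simulated demand over time
    rebalance(depth)
    time.sleep(0.1)
```

Real Clouds run loops like this beneath the covers; the architectural point is that capacity tracks demand in both directions, with no human in the loop.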

As a result, when you architect your app, you don’t know how many of each of these resources you will be using at any point in time, since the number can change without warning. You must take this variability into account when architecting your data, your middleware, your execution environments, your application logic, and your presentation tier—in other words, your entire distributed application.
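One practical consequence of that variability: nothing in the application can assume a fixed roster of instances. A common hedge, sketched below against a hypothetical registry endpoint, is to discover what is provisioned at run time rather than wiring addresses into the application:

```python
# Sketch: discover currently provisioned instances at run time instead of
# hard-coding them, since the Cloud may add or remove instances without
# warning. The registry endpoint is a hypothetical placeholder.
import random
import requests

REGISTRY = "https://registry.cloud.example.com/v1/services/order-service"

def live_instances() -> list[str]:
    """Ask the registry which instances exist right now."""
    return requests.get(REGISTRY, timeout=5).json()["instances"]

def call_order_service(payload: dict) -> dict:
    """Pick from whatever happens to be provisioned at this moment."""
    instance = random.choice(live_instances())
    return requests.post(f"{instance}/orders", json=payload, timeout=5).json()
```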

Cloud providers do their best to hide the underlying complexity inherent in delivering elastic infrastructure. But when you’re building a hybrid app—that is, one that includes Cloud-based as well as on-premise capabilities—your architects must have deeper knowledge of the underlying capabilities of the Cloud environment than Cloud providers are typically comfortable revealing. In other words, even once Cloud interoperability and portability standards mature, architects will still require additional information about the underlying capabilities of their Cloud environments that such standards won’t cover.

The ZapThink Take
This ZapFlash may leave you wanting more: namely, how precisely do you architect with the Cloud in mind? Unfortunately, there isn’t enough room for the answer to that question in this ZapFlash, but fear not, we’ll be laying out more details in the weeks and months to come.

Can’t wait? Then come to our Licensed ZapThink Architect SOA & Cloud course in San Diego January 16-19. We’ll dive deep into architecting with the Cloud, as we enjoy warm breezes off the Pacific from our lush venue by Seaforth Marina on Quivira Basin. Or better yet, take advantage of ZapThink’s new Cloud Competency Center by dropping us a line at [email protected]. We’re here to help!

