
Demilitarized Cloud (DMC) Pattern

DMC is a simple pattern to start with on the path to cloud adoption

The traditional web hosting architecture is built around the common three-tier web application model, which separates an application into presentation (web servers), application (application servers), and persistence (database servers) tiers. From a security perspective, while database and application servers almost always reside in the secure internal network, web servers may be placed in a DMZ if the application must be accessible from the untrusted network.

We are all familiar with the concept of a Demilitarized Zone (DMZ). Wikipedia defines it as a physical or logical sub-network that contains and exposes an organization's external-facing services to a larger untrusted network, usually the Internet. Its purpose is to segregate the highly sensitive infrastructure into a secure internal network, leaving only the publicly accessible services outside. While this in no way means that services in the DMZ are not secure, it does imply that the sensitivities attached to these services are not as high as those of services hosted within the secure internal network. This is a key point for identifying candidates for cloud migration, elaborated later in this article.

Typically, services that reside in a DMZ are those that need to be publicly accessible from the untrusted network. Web servers, webmail frontends, proxy servers, file access (e.g. FTP), VPN services, and DNS are some examples. Putting web servers in the DMZ ensures that internal web services cannot be compromised through direct access from the untrusted network. Similarly, with the increasing number of remote and mobile users, it has become necessary to deploy email frontends and reverse proxy servers in the DMZ so that the primary email server is never directly exposed to the untrusted network. The motivations for the other services listed above are similar.
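As a rough illustration, the placement decision described above can be sketched as a simple predicate over two attributes of a service: whether it must be reachable from the untrusted network, and how sensitive its data is. The service names and attribute values below are illustrative assumptions, not an inventory of any real environment.

```python
# Hypothetical sketch of the DMZ placement decision described above.
# Service names and attribute values are illustrative assumptions.

SERVICES = {
    "web_server":    {"public": True,  "sensitivity": "low"},
    "webmail":       {"public": True,  "sensitivity": "low"},
    "reverse_proxy": {"public": True,  "sensitivity": "low"},
    "app_server":    {"public": False, "sensitivity": "high"},
    "database":      {"public": False, "sensitivity": "high"},
}

def zone_for(attrs):
    """Publicly reachable, lower-sensitivity services go to the DMZ;
    everything else stays in the secure internal network."""
    if attrs["public"] and attrs["sensitivity"] != "high":
        return "dmz"
    return "internal"

placement = {name: zone_for(attrs) for name, attrs in SERVICES.items()}
```

The services that land in the `dmz` bucket are exactly the ones this article will later treat as the first candidates for cloud migration.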

Having elaborated on the concept of the DMZ, it should be emphasized that the purpose of this article is not to discuss DMZ architecture or replicate it in the cloud, but to use it to identify appropriate candidates for cloud migration. It should help enterprise architects set up policies and develop architecture blueprints for their publicly accessible applications.

When it comes to public cloud adoption, security is the primary concern of most of the senior IT leaders I talk to. Since this has more to do with fear of the unknown (a new public platform) than anything else, the solution I propose as a cloud strategist is aimed at gradually increasing their comfort level with it. That is, instead of taking a lift-and-shift approach of moving an entire platform (e.g. email, document sharing) or application in one shot, it is advisable to move one component or tier at a time. The former is a case of vertical partitioning of the organization's IT environment, which may seem a logical approach to cloud migration because of the perceived standalone nature of that particular platform or application. But, as we all know, today's IT environment is not siloed. All the systems and data repositories are interlinked in one way or another. Thus, moving a platform or an application to the cloud carries the overhead of modifying its interfaces to other systems, if not entirely rebuilding them.

The horizontal partitioning approach of moving one tier at a time has its own share of interface modifications as well. But it has the advantage of being gradual, steadily increasing senior management's comfort level along the way. Migrating a smaller logical component of the application involves fewer risks than migrating the entire platform. It requires less testing after the migration, and its smaller size makes it easier to roll out and roll back than the entire application. Building and maintaining redundancy (parallel environments) during any migration effort is critical. Since it is more economical to build a parallel environment for a smaller piece than for the entire application, the chances of service disruption due to insufficient infrastructure are practically eliminated.

The question now becomes where to start. A good way to kick off this approach is to migrate the outermost tier, which has the least stringent security requirements. As seen above, this outermost tier comprises the services hosted in the DMZ. They have restricted access to the internal network, and they all interface with the external untrusted network at the other end. By moving these services to the cloud, we essentially create our DMZ in the cloud, giving rise to a new architectural pattern that I call the Demilitarized Cloud (DMC). We now have a hybrid computing environment that spans both the on-premise network and the cloud.

Surrounding the migrated services with a security envelope such as a Virtual Private Cloud, maintaining the same limited access to the secure internal network, and interfacing with the untrusted network at the other end creates a DMZ-like secure logical sub-network within the cloud. Integration with the on-premise systems gives us the same application environment, now hybrid in nature, which retains trade secrets, proprietary data, and other highly sensitive assets in the secure internal network.
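A minimal sketch of what such a security envelope might enforce, expressed as firewall-style rules for the demilitarized cloud. All CIDR blocks and port numbers here are assumptions for illustration, not prescriptions for any particular cloud provider.

```python
# Hypothetical sketch of the DMC security envelope as firewall-style
# rules. CIDR blocks and ports are illustrative assumptions.
ANYWHERE     = "0.0.0.0/0"
ON_PREM_CIDR = "10.0.0.0/8"   # assumed secure internal network

# Traffic allowed into the demilitarized cloud from the untrusted network.
INGRESS = [
    {"source": ANYWHERE, "port": 80},
    {"source": ANYWHERE, "port": 443},
]

# Traffic the migrated services may send back toward on-premise systems.
EGRESS = [
    {"dest": ON_PREM_CIDR, "port": 8443},   # illustrative app-tier API
]

def egress_is_restricted(egress_rules):
    """The envelope preserves DMZ semantics only if outbound traffic is
    limited to the internal network, never open to the whole Internet."""
    return all(rule["dest"] != ANYWHERE for rule in egress_rules)

assert egress_is_restricted(EGRESS)
```

The asymmetry is the point: the untrusted network may reach the migrated services, but those services reach back only to narrowly defined endpoints in the secure internal network, mirroring the on-premise DMZ.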

With DMC as the starting point, the cloud migration roadmap can evolve along the lines of the multi-tiered web application architecture. Having moved the presentation tier to the cloud, we can focus on moving the application tier in the next iteration. By then, IT departments will have gained sufficient public cloud security knowledge to feel comfortable migrating more components there. We can move app servers, service buses, databases, or any combination or subset of those assets based on technical and business priorities.

To summarize, DMC is a simple pattern to start with on the path to cloud adoption. It helps identify less security-sensitive services that can be good first candidates for cloud migration. Because of the horizontal partitioning approach, it paves the way for migrating other tiers of the application stack in the future. The pace of this migration can be easily controlled by IT leadership depending on their comfort level. Unlike the vertical partitioning approach, it even provides the flexibility to roll back the migration gradually and without major disruption. It is a good first step to get your feet wet without the fear of drowning.

More Stories By Ravi Bhangley

Ravi is an accomplished IT leader with 20 years of work experience and a strong record of success in management, strategy, and vision. Before launching BizEnablers, a consulting firm specializing in enterprise cloud strategy and implementation, Ravi was Chief Architect at Dun & Bradstreet, the world's leading source of commercial information and insights into businesses. In his last 10 years in corporate IT, he held several senior leadership positions spearheading diverse programs and organizations. He currently sits on the IT Advisory Board of the New Jersey Technology Council (NJTC), the foremost organization of technology companies in New Jersey. He is actively engaged in cloud computing thought leadership through publications and frequent participation as a speaker or panelist. He holds a Master's in Computer Science from Michigan State University.
