
@CloudExpo: Article

Demilitarized Cloud (DMC) Pattern

DMC is a simple pattern to start with on the path to cloud adoption

The traditional web hosting architecture is built around a common three-tier web application model that separates it into presentation (web servers), application (application servers) and persistence (database servers) tiers. From a security perspective, while database and application servers almost always reside in the secure internal network, web servers may be placed in a DMZ if the application must be accessible from the untrusted network.

Most of us are familiar with the Demilitarized Zone (DMZ). Wikipedia defines it as a physical or logical sub-network that contains and exposes an organization’s external services to a larger untrusted network, usually the internet. Its purpose is to segregate the highly sensitive infrastructure into a secure internal network, leaving only the publicly accessible services outside. While this does not mean that services in the DMZ are not secure, it does imply that the sensitivities attached to these services are not as high as those of services hosted within the secure internal network – a key point, elaborated later in this article, for identifying candidates for cloud migration.

Typically, services that reside in a DMZ are those that need to be publicly accessible from the untrusted network. Web servers, webmail, proxy servers, file access (e.g. FTP), VPN services and DNS are some examples. Placing web servers in the DMZ ensures that internal web services are never exposed directly to the untrusted network. Similarly, with the increasing number of remote and mobile users, it has become necessary to deploy email frontends and reverse proxy servers in the DMZ so that the primary email server is never directly exposed to the untrusted network. The motivations for the other services listed above are similar.
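The selection rule described above can be expressed as a minimal sketch. The `Service` fields and the example inventory below are assumptions for illustration, not a real classification API: a service is a DMZ candidate when it must be reachable from the untrusted network and its sensitivity is comparatively low.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    publicly_accessible: bool  # must it be reachable from the untrusted network?
    highly_sensitive: bool     # does it hold or guard highly sensitive assets?

def belongs_in_dmz(svc: Service) -> bool:
    """A service is a DMZ candidate when it needs public access
    and is not among the most sensitive assets."""
    return svc.publicly_accessible and not svc.highly_sensitive

# Hypothetical inventory mirroring the examples in the text.
services = [
    Service("web server", True, False),
    Service("reverse proxy", True, False),
    Service("DNS", True, False),
    Service("primary email server", False, True),
    Service("database", False, True),
]

dmz_candidates = [s.name for s in services if belongs_in_dmz(s)]
print(dmz_candidates)  # ['web server', 'reverse proxy', 'DNS']
```

The same predicate, applied later in the article, yields the first candidates for cloud migration.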

Having elaborated on the concept of a DMZ, it should be emphasized that the purpose of this article is not to discuss DMZ architecture or replicate it in the cloud, but to use it to identify appropriate candidates for cloud migration. It should help enterprise architects set up policies and develop architecture blueprints for their publicly accessible applications.

When it comes to public cloud adoption, security is the primary concern of most of the senior IT leaders I talk to. Since this has more to do with fear of the unknown (a new public platform) than anything else, the solution I propose as a cloud strategist is aimed at gradually increasing their comfort level with it. That is, instead of taking a lift-and-shift approach of moving an entire platform (e.g. email, document sharing, etc.) or an application in one shot, it is advisable to move one component or tier at a time. The former is a case of vertical partitioning of the organization’s IT environment, which may seem a logical approach to cloud migration because of the perceived standalone nature of the platform or application. But, as we all know, today’s IT environment is not siloed: systems and data repositories are interlinked in one way or another. Moving a platform or an application to the cloud therefore carries the overhead of modifying its interfaces to other systems, if not rebuilding them entirely.

The horizontal partitioning approach of moving one tier at a time requires its own share of interface modifications. But it has the advantage of being gradual, raising senior management’s comfort level along the way. Migrating a smaller logical component of the application carries fewer risks than migrating the entire platform. It requires less post-migration testing, and its smaller size makes both rollout and rollback easier than for the entire application. Building and maintaining redundancy (parallel environments) during any migration effort is critical. Since it is more economical to build a parallel environment for a smaller piece than for the entire application, the chances of service disruption due to insufficient infrastructure are practically eliminated.

The question now becomes where to start. A good way to kick off this approach is to migrate the outermost tier, which has the least stringent security requirements. As seen above, this outermost tier comprises the services hosted in the DMZ. They have restricted access to the internal network, and they interface with the external untrusted network at the other end. By moving these services to the cloud, we essentially recreate our DMZ in the cloud, giving rise to a new architectural pattern that I call the Demilitarized Cloud (DMC). We now have a hybrid computing environment that spans both the on-premise network and the cloud.

Surrounding the migrated services with a security envelope such as a Virtual Private Cloud, maintaining the same limited access to the secure internal network, and interfacing with the untrusted network at the other end creates a DMZ-like secure logical sub-network within the cloud. Integration with the on-premise systems gives us the same application environment, now hybrid in nature, which retains trade secrets, proprietary data and other highly sensitive assets in the secure internal network.
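The security envelope just described amounts to a default-deny flow policy: the untrusted network may reach the DMC services, the DMC keeps the same limited access to the internal network, and everything else is blocked. The sketch below models this with illustrative zone names and ports; it is not a real VPC or firewall API, just the logic such rules encode.

```python
# Default-deny flow policy for the hybrid DMC topology.
# Zone names ("untrusted", "dmc", "internal") and ports are
# assumptions for illustration.
ALLOWED_FLOWS = {
    ("untrusted", "dmc"): {443},     # public HTTPS into the cloud DMZ
    ("dmc", "internal"): {8443},     # restricted app-tier access only
}

def flow_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """A flow is allowed only if it is explicitly listed; all else is denied."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(flow_permitted("untrusted", "dmc", 443))       # True
print(flow_permitted("untrusted", "internal", 443))  # False: no direct path inside
print(flow_permitted("dmc", "internal", 22))         # False: only the listed port
```

The key property is that no rule ever connects the untrusted network directly to the internal zone, which is exactly what keeps the sensitive assets on-premise.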

With DMC as the starting point, the cloud migration roadmap can evolve along the lines of the multi-tiered web application architecture. Having moved the presentation tier to the cloud, we can focus on the application tier in the next iteration. By then, IT departments will have gained enough public cloud security knowledge to feel comfortable migrating more components. We can move app servers, service buses, databases, or any combination or subset of those assets based on technical and business priorities.
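The tier-by-tier roadmap can be summarized as an ordered sequence, migrated outermost first. The tier labels and the helper below are assumptions for illustration; real roadmaps would, of course, split tiers further by business priority.

```python
# Tiers ordered from outermost (least sensitive, migrated first)
# to innermost (most sensitive, migrated last). Labels are illustrative.
MIGRATION_ORDER = ["presentation", "application", "persistence"]

def roadmap(completed_iterations: int) -> dict:
    """Split the tiers into migrated and remaining after N iterations."""
    return {
        "in_cloud": MIGRATION_ORDER[:completed_iterations],
        "on_premise": MIGRATION_ORDER[completed_iterations:],
    }

# After the first iteration, only the DMC (presentation tier) is in the cloud.
print(roadmap(1))  # {'in_cloud': ['presentation'], 'on_premise': ['application', 'persistence']}
```

Each step leaves a working hybrid environment, which is what makes the pace controllable and rollback gradual.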

To summarize, DMC is a simple pattern to start with on the path to cloud adoption. It helps identify less security-sensitive services that could be good first candidates for cloud migration. Because of the horizontal partitioning approach, it paves the way for migrating other tiers of the application stack in the future. The pace of this migration can be easily controlled by the IT leadership depending upon their comfort level. Unlike the vertical partitioning approach, this approach even provides the flexibility to roll back the migration gradually and without major disruptions. It is a good first step to get your feet wet without the fear of drowning.

More Stories By Ravi Bhangley

Ravi is an accomplished IT leader with 20 years of work experience, with a strong concentration and record of success in management, strategy and vision. Before launching BizEnablers, a consulting firm specializing in enterprise cloud strategy and implementation, Ravi was Chief Architect at Dun & Bradstreet, the world’s leading source of commercial information and insights into businesses. For his last 10 years in corporate IT, he held several senior leadership positions spearheading diverse programs and organizations. He currently sits on the IT Advisory Board of the New Jersey Technology Council (NJTC), the foremost organization of technology companies in New Jersey. He is actively engaged in cloud computing thought leadership through publications and frequent participation as a speaker or panelist. He holds a Masters in Computer Science from Michigan State University.
