The Facts About Cloud High Availability and Disaster Recovery

Understanding the facts about HA and DR in the cloud can help you make informed decisions

Enterprises are moving more and more applications to the cloud. Gartner predicts that the bulk of new IT spending by 2016 will be for cloud computing platforms and applications, and that nearly half of large enterprises will have cloud deployments by the end of 2017.[1]

The far-reaching impact of cloud computing is summarized in a recent McKinsey report on disruptive technologies: "Cloud technology has the potential to improve productivity across $3 trillion in global enterprise IT spending, as well as enabling the creation of new online products and services for billions of consumers and millions of businesses alike."[2]

For many organizations, moving applications that can tolerate brief periods of downtime to the cloud is a straightforward decision with clear benefits. However, concerns about how to provide high availability and disaster protection in the cloud may make this decision more difficult for business-critical applications such as SQL Server, SAP, and Exchange. Understanding the facts about HA and DR in the cloud can help you make informed decisions about moving applications to the cloud, while ensuring that the important business operations that depend on them are protected from downtime and data loss.

Fact #1: You need high availability protection in a cloud.
Do not assume that your cloud environment provides high availability protection unless you have specifically configured it for HA. According to a recent study, "The average unavailability of cloud services is 10 hours per year or more, while the average availability is estimated to be 99.9%, far less than the expected availability of business-critical applications."[3] That is the equivalent of more than a full business day of downtime. Indeed, in 2013, Microsoft Windows Azure, Google, and Amazon Web Services all had some measure of service interruption or downtime, ranging from four minutes to several hours.[4]
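For perspective, availability percentages translate directly into hours of downtime per year. The short Python sketch below does the arithmetic; the specific availability figures in it are illustrative examples, not values taken from the study cited above.

```python
# Convert an availability percentage into expected downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Return expected hours of downtime per year for a given availability %."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct:>7}% availability -> {annual_downtime_hours(pct):6.2f} hours/year")

# 99.0%   -> 87.60 hours/year
# 99.9%   ->  8.76 hours/year
# 99.99%  ->  0.88 hours/year (~53 minutes)
# 99.999% ->  0.09 hours/year (~5 minutes)
```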

For business-critical applications, the redundancy that you can get with some cloud solutions, such as Windows Azure, is not enough. When you consider the cost of a minute of downtime for applications such as SQL Server, Oracle, and SAP, which may run many of your key business processes, it becomes clear that you need true high availability and disaster recovery protection. You need to ensure that end users have immediate access to data and applications in the event of a local failure, a regional disaster, or anything in between.

However, the traditional way of providing high availability protection is to build a cluster using two identical servers - a primary server and a standby server - with shared (typically SAN) storage. If the primary server fails, application operation moves to the standby server, which has immediate access to the same storage. The problem is that SANs are not only expensive to buy, manage, and maintain; they are simply not an option in public cloud offerings. There are, however, high availability solutions that can be used in a cloud without requiring a SAN.

Fact #2: You can build a cluster in a cloud.
Even though you cannot have a SAN in a cloud, you can build a cluster for high availability protection. In a Windows cloud, you simply add SANLess cluster software to your Windows Server Failover Cluster (WSFC). The SANLess software uses real-time, block-level replication to keep local storage in two geographic regions of the cloud synchronized. If there is an outage, application operation is automatically moved to the remote instance, which has immediate access to current data. The synchronized storage looks to the WSFC like traditional shared storage, so there is no added complexity or specialized skill needed to build or manage a SANLess cluster. In fact, a SANLess cluster is easy to manage and has the added benefit of eliminating the SAN as a single point of failure. SANLess clusters also provide complete configuration flexibility, allowing you to replicate between physical, virtual, cloud, and hybrid cloud environments, as well as between SAN and SANLess clusters.
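The replication mechanism described above can be pictured with a very small sketch. The Python below is a deliberately simplified, hypothetical illustration of synchronous block-level mirroring, where a write is acknowledged only after both copies are current; it is not the actual SANLess cluster implementation, and all class and region names are invented for the example.

```python
# Simplified illustration of synchronous block-level mirroring:
# a write is acknowledged only after both replicas have stored the block.
# Hypothetical sketch; not an actual SANLess cluster implementation.

class BlockDevice:
    """In-memory stand-in for a node's local disk."""
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}

    def write(self, block_no: int, data: bytes) -> None:
        self.blocks[block_no] = data

class SynchronousMirror:
    """Applies every write to the primary and the replica before acknowledging."""
    def __init__(self, primary: BlockDevice, replica: BlockDevice):
        self.primary = primary
        self.replica = replica

    def write(self, block_no: int, data: bytes) -> bool:
        self.primary.write(block_no, data)
        self.replica.write(block_no, data)   # sent over the network in a real cluster
        return True  # acknowledge only after both copies are current

primary = BlockDevice("cloud-region-east")
replica = BlockDevice("cloud-region-west")
mirror = SynchronousMirror(primary, replica)
mirror.write(42, b"committed transaction")
assert primary.blocks[42] == replica.blocks[42]  # standby storage is always current
```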

Fact #3: You can have geographically separated nodes for DR in a cloud.
While providing high availability within the cloud will protect you from normal hardware failures and other unexpected outages within an availability zone (Amazon) or fault domain (Azure), you still need to protect against regional disasters. The easiest solution is to configure a multisite (geographically separated) cluster.

One effective method is to build a SANLess cluster within a cloud and extend it for disaster recovery by adding one or more nodes in an alternate data center or a different geographic region of the cloud. Unlike traditional clusters, which require identical hardware and software in every node, a SANLess cluster allows you to mix physical, cloud, and hybrid cloud configurations. The benefits of a DR configuration are clear. For example, simply adding a third, geographically separated node to your SANLess cluster in a Windows Azure cloud can give you a recovery point objective (RPO) of near-zero data loss and a recovery time objective (RTO) of about one minute.
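To see how replication mode and failover time translate into RPO and RTO, here is a rough, hypothetical estimator in Python; the lag and failover figures are placeholders rather than measured values for any particular cloud or product.

```python
# Rough RPO/RTO estimator for a hypothetical multisite cluster.
# Synchronous replication: no committed data is lost, so RPO is ~0.
# Asynchronous replication: RPO is bounded by the replication lag.
# All figures below are illustrative placeholders.

def estimate_rpo_seconds(replication: str, max_lag_seconds: float) -> float:
    if replication == "synchronous":
        return 0.0
    return max_lag_seconds

def estimate_rto_seconds(failure_detect_s: float, failover_s: float,
                         app_recovery_s: float) -> float:
    return failure_detect_s + failover_s + app_recovery_s

# Hypothetical third node in a remote region, replicated asynchronously.
rpo = estimate_rpo_seconds("asynchronous", max_lag_seconds=2.0)
rto = estimate_rto_seconds(failure_detect_s=15, failover_s=30, app_recovery_s=20)
print(f"Estimated RPO: {rpo:.0f} s, estimated RTO: {rto:.0f} s (~1 minute)")
```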

Fact #4: You can create a cluster that mixes cloud and on-premises nodes.
You can use your on-premises data center as your primary location with a failover cluster to provide high availability protection and use the cloud as your hot standby DR site. This is a very cost-effective alternative to building out your own DR site, or renting rack space in a business continuity facility. In this case, the on-premises servers can be your choice of traditional SAN-based clusters, SANLess clusters, or even single servers not currently participating in a cluster.

The objective of having a "hot" standby DR site is to have standby servers in the DR site up and running as quickly as possible, with access to a copy of the most recent application data, so that recovery in the event of a disaster is nearly immediate. A multisite cluster is an effective way to implement a hot standby DR site. In this case, the SANLess cluster software keeps the data on the cloud-based standby nodes up to date through continuous replication from the on-premises nodes. In the event of a forecasted disaster, such as a storm or a flood, applications can be moved to the cloud before the disaster strikes. In the event of an unexpected disaster, applications can be recovered manually or, in some cases, automatically, depending upon the quorum configuration. This mix of cloud and on-premises nodes gives you an excellent RTO and RPO with minimal investment in infrastructure.
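Whether that recovery is automatic or manual comes down to quorum: the surviving nodes must hold a majority of votes to keep the cluster running on their own. The sketch below illustrates the basic node-majority arithmetic for a hypothetical three-vote cluster; it is a simplification of WSFC quorum behavior, not a reproduction of its configuration options.

```python
# Simplified node-majority quorum check for a hypothetical cluster.
# WSFC supports several quorum models; this shows only the basic idea that
# automatic failover requires the surviving members to hold a majority of votes.

def has_quorum(surviving_votes: int, total_votes: int) -> bool:
    """A partition may keep running only if it holds a strict majority of votes."""
    return surviving_votes > total_votes // 2

total_votes = 3  # e.g., two on-premises nodes plus one cloud node or witness

# One on-premises node fails: the remaining two votes still form a majority,
# so the cluster can fail over automatically.
print(has_quorum(surviving_votes=2, total_votes=total_votes))  # True

# The entire on-premises site is lost: the single cloud node has no majority,
# so an administrator must force quorum and recover manually.
print(has_quorum(surviving_votes=1, total_votes=total_votes))  # False
```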

Fact #5: HA and DR in a cloud can be easy and highly cost-effective.
If you choose SANLess clustering software that provides an intuitive configuration interface, you can create a standard WSFC in a cloud in minutes, without specialized skills. A SANLess cluster can also deliver significant cost savings in several ways. First, in a Microsoft SQL Server environment, a SANLess cluster can give you high availability with SQL Server Standard Edition licenses, without requiring an upgrade to costly SQL Server Enterprise Edition.

Second, you can realize hundreds of thousands of dollars in savings with a SANLess cluster by eliminating the total cost of ownership (TCO) associated with a SAN. These savings include the SAN hardware acquisition cost; the power, cooling, and data center floor space costs; and the ongoing labor cost of specialized SAN administration.
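Framed as a simple calculation, the comparison looks like the sketch below; every dollar figure is a placeholder to be replaced with your own costs, not a vendor or analyst estimate.

```python
# Hypothetical TCO comparison over a 3-year horizon.
# All dollar figures are placeholders, not vendor or analyst data.

YEARS = 3

san_costs = {
    "SAN hardware acquisition": 150_000,
    "Power, cooling, floor space": 20_000 * YEARS,
    "Specialized SAN administration": 60_000 * YEARS,
}

sanless_costs = {
    "SANLess cluster software licenses": 20_000,
    "Additional local/cloud storage": 5_000 * YEARS,
}

san_total = sum(san_costs.values())
sanless_total = sum(sanless_costs.values())
print(f"SAN-based cluster TCO: ${san_total:,}")
print(f"SANLess cluster TCO:   ${sanless_total:,}")
print(f"Estimated savings:     ${san_total - sanless_total:,}")
```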

If you are thinking about moving your important applications to the cloud, you need to consider how you will protect those applications from downtime and data loss. While traditional SAN-based clusters are not possible in these environments, SANLess clusters can provide an easy, cost-efficient alternative. These clusters not only provide high availability protection, but also enable significantly greater configuration flexibility and potentially dramatic savings in both licensing costs and SAN TCO.


1"Gartner Says Cloud Computing Will Become the Bulk of New IT Spend by 2016."

2 Manyika, James and Michael Chui, et al, "Disruptive technologies: Advances that will transform life, business, and the global economy," McKinsey Global Institute (May 2013) 

3Whittaker, Josh, "Amazon Web Services Suffers Outage, Takes Out Vine, Instagram, Others with it," ZDNet, (August 26, 2013)

4Mackay, Martin, "Downtime Report: Top Ten Outages in 2013,", (December 2013)

More Stories By Jerry Melnick

Jerry Melnick ([email protected]) is responsible for defining corporate strategy and operations at SIOS Technology Corp., maker of SIOS SAN and SANLess cluster software. He has more than 25 years of experience in the enterprise and high availability software industries. He holds a Bachelor of Science degree from Beloit College, with graduate work in Computer Engineering and Computer Science at Boston University.

