Business Critical Apps | @CloudExpo #Cloud #BigData #IoT #API #AWS #Azure

Understanding the truths and myths of HA and DR in cloud deployments can dramatically reduce data center costs and risks

The "New Data Center"
The New Data Center has arrived. Over the past decade we have seen the migration from physical servers to virtual machines, and now to public, private, and hybrid clouds. Each of these migrations has taken a similar path: test, dev, and non-critical workloads are the first to move, and as the technology matures, business-critical tier 1 applications eventually follow. At this point the percentage of applications still running directly on physical servers is declining rapidly. As cloud IaaS offerings such as AWS and Azure mature, many companies are moving their tier 1 applications to the cloud along with the rest of their infrastructure.

Figure 1: The New Data Center

This movement to the cloud was predicted by Gartner analysts in October 2013.

"The use of cloud computing is growing, and by 2016 this growth will increase to become the bulk of new IT spend, according to Gartner, Inc. 2016 will be a defining year for cloud as private cloud begins to give way to hybrid cloud, and nearly half of large enterprises will have hybrid cloud deployments by the end of 2017." [1]

This rapid adoption of the cloud puts pressure on the cloud providers to deliver on their promises of flexibility, agility, and availability. Before moving business-critical applications to the cloud, IT needs to ensure that doing so will not mean sacrificing performance, availability, or disaster protection.

Cloud Availability - 99.95% Uptime "Guaranteed"
Both Amazon Web Services (AWS)[2] and Microsoft Azure[3] offer Service Level Agreements that guarantee 99.95% uptime, which equates to roughly 22 minutes of downtime per month. However, if you read both SLAs carefully, you will see that in order to qualify for the SLA you have to deploy two or more instances per region across different "Availability Zones"[4] or "Fault Domains"[5].
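As a quick sanity check on those numbers, the arithmetic below converts an uptime percentage into the downtime it permits per month. This is a back-of-the-envelope sketch only; the actual measurement window and exclusions are defined in each provider's SLA document.

```python
# Back-of-the-envelope math: how much downtime does a given uptime SLA allow?
# Illustrative only; confirm the measurement window and exclusions in the
# provider's actual SLA.

def allowed_downtime_minutes(sla_percent: float, days_in_month: int = 30) -> float:
    """Return the downtime (in minutes) permitted per month by an uptime SLA."""
    minutes_per_month = days_in_month * 24 * 60
    return minutes_per_month * (1 - sla_percent / 100)

if __name__ == "__main__":
    for sla in (99.9, 99.95, 99.99):
        print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} minutes/month")
    # 99.95% of a 30-day month works out to roughly 21.6 minutes of downtime,
    # i.e., the "roughly 22 minutes per month" cited above.
```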

Amazon's Availability Zones and Microsoft's Fault Domains are essentially the same concept. Within any geographic region of their IaaS cloud offerings there are sections of infrastructure that are independent of each other, meaning they have no compute, network, storage or power in common. AWS regions have two to three Availability Zones whereas Azure by default allows for two Fault Domains per "Availability Set," but has recently added a feature that allows up to three Fault Domains[6].

What the SLA guarantees is that 99.95% of the time you should be able to reach at least one instance, assuming you have two or more instances running in different Fault Domains or Availability Zones. That works fine for applications like the web servers shown in Figure 2, and for non-transactional application servers, where you can simply load balance between instances for high availability and scalability. But what do you do for transactional applications like database servers, where the data is dynamic? Something must be done to keep the instances in sync with each other.

Figure 2: Azure Fault Domains
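To illustrate the "two or more instances in different zones" requirement, here is a minimal sketch that launches one EC2 instance in each of two Availability Zones. It assumes boto3 is installed and credentials are configured; the AMI ID, instance type, and zone names are placeholders. The Azure equivalent would be placing two VMs in the same availability set so they land in different Fault Domains.

```python
# Sketch: launch one EC2 instance in each of two Availability Zones so a
# single-zone failure cannot take down both copies of the workload.
# Assumes boto3 is installed and AWS credentials are configured; the AMI ID,
# instance type, and zone names below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for zone in ("us-east-1a", "us-east-1b"):
    ec2.run_instances(
        ImageId="ami-12345678",      # placeholder AMI
        InstanceType="t2.medium",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
    )
```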

Another consideration is that all the SLA really guarantees is "dial tone." It does not guarantee that the application will be up and running, or even that it will be performing at an acceptable level.

Note that even the leading cloud providers, including Microsoft and Amazon, have had downtime events in the past 12 months. According to CloudHarmony[7], Amazon EC2 and EBS combined had 46 outages ranging from 19 seconds to 2.8 hours in the 365 days prior to June 16, 2015. Microsoft Azure Virtual Machines and Object Storage had 242 outages in the same period, ranging from 10.4 minutes to 13.16 hours.

If your cloud provider doesn't meet its SLA, what is the impact on your organization? At the end of the day, all it really means is that you are refunded a fraction of your bill for the month in which the downtime occurred, as shown in Figure 3.

Figure 3: Service Credits for Missed SLAs

If a 25% discount for 13.16 hours of downtime does not seem like an even trade, you need to take steps to protect your applications from downtime yourself. Traditionally, downtime has been minimized by deploying failover clusters.
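To put those credits in perspective, the sketch below converts observed monthly downtime into an uptime percentage and maps it to a credit tier. The tiers shown are illustrative placeholders suggested by Figure 3; the real thresholds live in each provider's current SLA document.

```python
# Sketch: translate observed monthly downtime into an uptime percentage and an
# approximate service credit. The credit tiers below are illustrative
# placeholders; check the provider's SLA for the actual thresholds.

HOURS_PER_MONTH = 30 * 24

def uptime_percent(downtime_hours: float) -> float:
    return 100.0 * (1 - downtime_hours / HOURS_PER_MONTH)

def service_credit(uptime: float) -> int:
    """Return the credit (percent of the monthly bill) for a given uptime."""
    if uptime >= 99.95:
        return 0      # SLA met, no credit
    if uptime >= 99.0:
        return 10     # illustrative tier
    return 25         # illustrative tier

if __name__ == "__main__":
    downtime = 13.16  # hours, the longest Azure outage cited above
    up = uptime_percent(downtime)
    print(f"{downtime} hours down -> {up:.2f}% uptime -> {service_credit(up)}% credit")
```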

Failover Clusters in the Traditional Data Center
Failover clusters have been the traditional mechanism to ensure high availability for transactional applications that are deemed business critical. A traditional failover cluster has the following properties:

  • Two or more "nodes": A failover cluster is made up of a group of servers (aka nodes) that act as safety nets for each other. If one node fails, then one of the remaining nodes will continue to run the clustered workload.
  • Shared Storage: Each cluster node must have access to the same data set, which is typically stored on a shared disk, SAN or iSCSI array.
  • System level monitoring: A cluster uses a heartbeat mechanism to detect the failure of an entire system and initiates recovery action to make sure the clustered workload continues to run on one of the remaining cluster nodes (a simplified sketch of this detection loop follows this list).
  • Application level monitoring: Failure of an application to perform properly is detected and recovery action is taken. In some cases the application can be recovered in place, otherwise the application workload will be moved to the standby server.
  • Planned Maintenance: An application workload can be moved from one node to another with minimal downtime, allowing planned maintenance to be performed on the now-idle node without scheduling significant downtime.
  • Client redirection: Clients connecting to the cluster workload will automatically be reconnected to the active node in the cluster whenever the workload moves between cluster nodes.
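To make the heartbeat idea concrete, here is a deliberately simplified sketch of the failure-detection loop. It is conceptual only: real cluster software such as Windows Server Failover Clustering adds quorum voting, witness resources, and fencing, and the health probe shown here (a plain TCP connect) is a crude stand-in for a real cluster heartbeat.

```python
# Deliberately simplified sketch of a heartbeat-driven failover decision.
# Real cluster software adds quorum, fencing, and split-brain protection;
# this only illustrates the basic detection loop.
import socket
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeat checks
MISSED_LIMIT = 5           # missed heartbeats before the node is declared dead

def node_is_alive(node: str, port: int = 3343, timeout: float = 1.0) -> bool:
    """Crude health probe: can we open a TCP connection to the node?"""
    try:
        with socket.create_connection((node, port), timeout=timeout):
            return True
    except OSError:
        return False

def monitor(active_node: str, standby_node: str) -> None:
    missed = 0
    while True:
        if node_is_alive(active_node):
            missed = 0
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                print(f"{active_node} declared failed; moving workload to {standby_node}")
                # In a real cluster this is where the disks, virtual IP, and
                # application service would be brought online on the standby.
                break
        time.sleep(HEARTBEAT_INTERVAL)
```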

Failover Clusters in the Cloud
For business-critical applications in the cloud, failover clusters are still the best way to ensure that applications remain highly available. A failover cluster in the cloud is required so that, should one Azure Fault Domain or AWS Availability Zone fail, a node in a separate domain or zone can recover the workload with minimal downtime.

A failover cluster traditionally requires shared storage. In most cloud environments, including both AWS and Azure, shared storage that supports failover clustering is not available. In these cases you have two alternatives: use replication options that come with the application, or use third-party SANless cluster solutions.

Application based replication
Many applications have built-in features that allow for replication and high availability without the use of a SAN. Solutions such as SQL Server AlwaysOn Availability Groups[8], Exchange Server Database Availability Groups[9], DFS-R[10] and Oracle Streams[11] are just some of the examples of replication features built into applications that may help provide availability within cloud deployments.

Each of these solutions must be understood thoroughly before you embark on a deployment, as each comes with its own restrictions and limitations.
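As one example of the homework involved: with SQL Server AlwaysOn Availability Groups, replica health is exposed through dynamic management views, and a quick check like the sketch below can confirm that a secondary replica is actually synchronized before you rely on it. This assumes pyodbc, a SQL Server ODBC driver, and VIEW SERVER STATE permission; the server name and driver string are placeholders.

```python
# Sketch: query the AlwaysOn DMVs to check whether each availability replica
# is healthy and synchronized. Assumes pyodbc and an ODBC driver are installed
# and the login has VIEW SERVER STATE; connection details are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlnode1;DATABASE=master;Trusted_Connection=yes;"
)

query = """
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM   sys.dm_hadr_availability_replica_states AS ars
JOIN   sys.availability_replicas AS ar
       ON ar.replica_id = ars.replica_id;
"""

for server, role, health in conn.cursor().execute(query):
    print(f"{server}: {role}, synchronization health = {health}")
```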

SANless Clusters
Third-party host-based replication[12] solutions have been around since the 1990s and help with high availability and disaster recovery of business-critical applications. Check with your cloud provider to see which solutions are certified for use in their cloud. Choose SANless clustering software that is easy to implement and configure and is fully integrated with industry-standard clustering solutions. For example, ensure the SANless clustering software can be added to a standard Windows Server failover clustering environment, enabling it to be used in cloud, hybrid cloud, and virtual environments where shared storage is impossible or impractical. This software also provides a greater degree of configuration flexibility, allowing you to create hybrid cluster environments with any combination of physical, virtual, and cloud nodes.

The benefit of a SANless cluster is that it behaves the same as a traditional cluster, except that it uses local storage instead of shared storage. Most applications have supported traditional clusters for many years, and administrators are familiar with the features and functionality. SANless clusters are particularly useful if you have many different applications to protect, because you can manage them all with the same technology.
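The core idea behind host-based replication is straightforward: every write to the active node's local disk is also shipped to the standby node, either synchronously (the write is not acknowledged until the standby has it) or asynchronously (writes are queued and shipped in the background). The toy sketch below illustrates that write path only; real products replicate at the block level inside the storage stack, and the class and callback names here are hypothetical.

```python
# Toy sketch of the host-based replication idea behind a SANless cluster:
# each write lands on local storage and is also shipped to the standby node.
# Real products do this at the block/volume level in the OS storage stack;
# this only illustrates synchronous vs. asynchronous behavior.
import queue
import threading

class ReplicatedWriter:
    def __init__(self, local_path: str, send_to_standby, synchronous: bool = True):
        self.local_path = local_path
        self.send_to_standby = send_to_standby   # callable that ships data to the peer
        self.synchronous = synchronous
        self._async_queue: queue.Queue = queue.Queue()
        if not synchronous:
            threading.Thread(target=self._drain, daemon=True).start()

    def write(self, data: bytes) -> None:
        with open(self.local_path, "ab") as f:   # write to local storage first
            f.write(data)
        if self.synchronous:
            self.send_to_standby(data)           # block until the standby has the data
        else:
            self._async_queue.put(data)          # ship to the standby in the background

    def _drain(self) -> None:
        while True:
            self.send_to_standby(self._async_queue.get())
```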

Summary
The new data center is inevitable. The benefits of flexibility and agility are just too enticing to ignore. However, it is imperative that AVAILABILITY not be taken for granted. It is incumbent upon your cloud architecture team to understand the steps that must be taken to ensure that tier 1 business critical applications are highly available.

References

  1. http://www.gartner.com/newsroom/id/2613015
  2. http://aws.amazon.com/ec2/sla/
  3. http://azure.microsoft.com/en-us/support/legal/sla/
  4. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
  5. https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-manage-availability/
  6. http://azure.microsoft.com/en-us/documentation/templates/101-create-availability-set-3fds/
  7. https://cloudharmony.com/status-1year-for-aws
  8. https://msdn.microsoft.com/en-us/library/ff877884.aspx
  9. https://technet.microsoft.com/en-us/library/Dd638137(v=EXCHG.150).aspx
  10. https://msdn.microsoft.com/en-us/library/Bb540025(v=VS.85).aspx
  11. http://www.oracle.com/technetwork/database/information-management/streams-fov-11g-134280.pdf
  12. http://www.linuxclustering.net/2012/11/07/host-based-replication-vs-san-replication/

More Stories By David Bermingham

David Bermingham is recognized within the technology community as a high availability expert and has been honored by his peers, having been named a Microsoft MVP in Clustering every year since 2010. As Director of Technical Evangelism at SIOS, he focuses on evangelizing Microsoft high availability and disaster recovery solutions, as well as providing hands-on support, training, and professional services for cluster implementations.

David holds numerous technical certifications and draws from over twenty years of experience in IT, including work in the finance, healthcare and education fields, to help organizations design solutions to meet their high availability and disaster recovery needs. He has recently begun speaking on deploying highly available SQL Servers in the Azure Cloud and deploying Azure Hybrid Cloud for disaster recovery.
