Lessons from the Amazon Cloud Outage

Best Practices for Resilient Cloud Applications

As reported in SYS-CON and elsewhere, Amazon's cloud crashed, taking down sites such as Reddit, Foursquare, Quora, Hootsuite, Indaba, GroupMe, Scvngr, Motherboard.tv and a few more with it.

Several components of the Amazon cloud portfolio, including EC2, Elastic Block Store (EBS), Relational Database Service (RDS), Elastic Beanstalk, CloudFormation and, later, MapReduce, were impacted.

Amazon has so far given the following explanation for the crash:

"A networking event triggered a large amount of re-mirroring of EBS [Extended Block Store] volumes ... This re-mirroring created a shortage of capacity ... which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes."

While the immediate issue will be resolved, the outage has had a significant impact on cloud adoption by large enterprises. Traditional high-availability best practices still hold good in the cloud, however, and the incident should be seen less as a failure of the cloud than as a failure of implementation. The following best practices will safeguard cloud applications on top of the out-of-the-box high-availability options provided by a cloud provider such as Amazon.

Ensure Application Controlled Scalability
Components such as Auto Scaling, Elastic Load Balancing and CloudWatch help with scalability by monitoring resource usage and automatically allocating new instances.

However, scalability is best achieved when the application itself is aware of its usage and scales accordingly.
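
One way to make an application usage-aware is to have it publish its own business-level metrics to the monitoring layer, so that scaling decisions are driven by what the application knows rather than by CPU utilization alone. The following is a minimal sketch in Python, assuming the boto3 library is available; the namespace, metric name and dimension are hypothetical placeholders, not part of the original article.

```python
import boto3  # assumed available; credentials and region come from the environment

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def report_pending_transactions(count, priority="high"):
    """Publish an application-level metric that a scaling policy can act on."""
    cloudwatch.put_metric_data(
        Namespace="MyApp/Workload",          # hypothetical custom namespace
        MetricData=[{
            "MetricName": "PendingTransactions",
            "Dimensions": [{"Name": "Priority", "Value": priority}],
            "Value": float(count),
            "Unit": "Count",
        }],
    )
```

A scaling policy or operator dashboard can then react to PendingTransactions instead of raw CPU, which is the essence of application-controlled scalability.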

One such implementation pattern is a routing server, where application characteristics such as the type of user, the geography or the kind of transaction determine the target destination that will process the request, with load balanced accordingly.

Making these data-aware scaling rules configurable without restarting servers goes a long way toward adjusting the routing mechanism when specific regions or Availability Zones are down for unknown reasons. It also ensures that scaling rules can be altered dynamically in catastrophic situations, so that high-priority transactions continue to be served while low-priority transactions are put on hold.
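
As a rough illustration of the routing-server idea, the sketch below (Python, with a hypothetical routing_rules.json file) picks a target pool from rules that can be reloaded at runtime, so traffic can be redirected away from an unhealthy zone and low-priority requests parked without a restart.

```python
import json

# Hypothetical rules file, e.g. {"high": {"targets": ["zone-a-app1"], "hold": false},
#                                "low":  {"targets": [], "hold": true}}
RULES_FILE = "routing_rules.json"

def load_rules(path=RULES_FILE):
    """Re-read the routing rules; invoke on a configuration-change event, not only at startup."""
    with open(path) as f:
        return json.load(f)

def route(request, rules):
    """Choose a destination based on application characteristics (priority, geography, user type)."""
    key = request.get("priority", "low")
    pool = rules.get(key, {"targets": [], "hold": True})
    if pool.get("hold") or not pool["targets"]:
        return None  # park the request (e.g. on a queue) until capacity returns
    # A stable hash over a user attribute spreads load across the healthy targets
    return pool["targets"][hash(request["user_id"]) % len(pool["targets"])]
```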

Stay Disconnected
A typical application consists of multiple logical and physical components; it is best to decouple these components so that each layer interacts with the next in an asynchronous manner.

While some applications, such as banking, stock trading and online reservations, require a real-time, always-connected nature, most applications today can still take advantage of a disconnected architecture.

Use a reliable messaging and request/response framework so that end users are never aware that their request has been queued; instead they feel that their request has been taken care of and has received a satisfactory response. This ensures that even if some physical servers or logical components are down, the end user is not impacted.
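
A minimal sketch of the interaction, using only Python's standard library: in a real deployment the in-process queue would be replaced by a durable message broker (for example Amazon SQS), but the shape is the same, with the user receiving an immediate acknowledgement while the work is processed asynchronously.

```python
import queue
import threading
import uuid

work_queue = queue.Queue()   # stand-in for a durable broker such as SQS
results = {}

def submit(payload):
    """Accept the request immediately; the user sees an acknowledgement, not a 'queued' state."""
    request_id = str(uuid.uuid4())
    work_queue.put((request_id, payload))
    return request_id

def worker():
    """Drain the queue asynchronously; an outage delays work, it does not lose the request."""
    while True:
        request_id, payload = work_queue.get()
        results[request_id] = {"status": "done", "echo": payload}
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
```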

Keep Transactions Smaller
The best path to transparent application failover and recoverability is to keep transactions as small as possible, with each one forming a logically meaningful step within the overall process from an end-user perspective.

Remember the legacy applications of a previous era that accepted transaction data across several fields and pages behind a single SAVE button; if anything went wrong, end users lost all their data and had to re-enter it. This must be avoided at all costs, and systems should be designed as a combination of logically smaller steps tied together in a loosely coupled manner.
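
As a sketch of the "many small steps instead of one big SAVE" idea (Python, with a hypothetical in-memory store standing in for a durable one), each step is persisted as soon as it completes, so a failure loses at most the current step:

```python
def save_step(order_id, step_name, data, store):
    """Persist each step as its own small transaction."""
    store.setdefault(order_id, {})[step_name] = data

def confirm_order(order_id, store):
    """The final step only ties together steps that are already safely saved."""
    required = ("customer", "items", "payment")
    if all(step in store.get(order_id, {}) for step in required):
        store[order_id]["status"] = "confirmed"
        return True
    return False   # a missing step can be retried on its own
```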

VEET: The User-Entered Data
In a disconnected environment, end users are not there to fix data-entry errors or provide additional information. The most fault-tolerant systems are therefore designed so that the user enters minimal data and the VEET pattern (Validate, Extract, Enrich, Transform) is applied to it.

Validate: Once the transaction inputs are entered and accepted, they remain meaningful across the system components and no data needs to be corrected later.

Extract: Never accept information that can be derived; this avoids errors in data that is already known.

Enrich: Build up additional information from what already exists, so that it need not be entered by the user. For example, if the user enters a zip code, the city, state and other details can be retrieved automatically.

Transform: Convert the data from one form to another as required by the system flow.

These steps ensure that the system can recover gracefully from failures in a way that is transparent to the user.
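
A minimal sketch of a VEET pipeline in Python; the zip-code table is a hypothetical stand-in for a real address lookup service, and the field names are illustrative only.

```python
ZIP_LOOKUP = {"10001": {"city": "New York", "state": "NY"}}   # stand-in for an address service

def validate(data):
    if not data.get("zipcode", "").isdigit():
        raise ValueError("zipcode must be numeric")
    return data

def extract(data):
    # Never accept what can be derived: drop user-supplied city/state.
    return {k: v for k, v in data.items() if k not in ("city", "state")}

def enrich(data):
    data.update(ZIP_LOOKUP.get(data["zipcode"], {}))
    return data

def transform(data):
    # Reshape into the form the downstream components expect.
    address = {"zip": data["zipcode"]}
    address.update({k: data[k] for k in ("city", "state") if k in data})
    return {"name": data.get("name"), "address": address}

def veet(data):
    return transform(enrich(extract(validate(data))))

# veet({"name": "Jane", "zipcode": "10001", "city": "wrong"}) ->
# {"name": "Jane", "address": {"zip": "10001", "city": "New York", "state": "NY"}}
```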

Keep the Backup Data to the Lowest Granularity for Recovery
Storage mechanisms like Amazon EBS (Elastic Block Store) have built-in fault tolerance, with volumes replicated automatically. This is a very good feature. But beyond backing up data as raw volumes, we should also think about the ability to recover quickly and get going again in case of disaster.

Database instances typically take time to recover pending transactions or to roll back unfinished ones; proper backup mechanisms can help recover from this scenario quickly.

The following options can be considered, in order, to recover quickly from a disaster scenario.

Alternative Write Mechanism: Log shipping, a standby database or simply mirroring the data to other Availability Zones is one of the best mechanisms to keep databases in sync and to recover quickly when one zone is unavailable.

Implicit Raw Volume Backups: This comes out of the box on most cloud platforms; however, the intelligence to recover the raw volumes quickly with automated scripts should also be in place, as sketched below.
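
A minimal sketch of such automation in Python using boto3 against the EC2 API; the region, volume IDs and Availability Zone are placeholders, not values from the article.

```python
import boto3  # assumed available; credentials come from the environment

ec2 = boto3.client("ec2", region_name="us-east-1")

def snapshot_volumes(volume_ids, reason="scheduled-backup"):
    """Take point-in-time EBS snapshots so recovery always starts from a recent, known-good state."""
    return [
        ec2.create_snapshot(VolumeId=v, Description=reason)["SnapshotId"]
        for v in volume_ids
    ]

def restore_volume(snapshot_id, availability_zone):
    """Recreate a volume from a snapshot in a healthy Availability Zone and return its id."""
    return ec2.create_volume(
        SnapshotId=snapshot_id, AvailabilityZone=availability_zone
    )["VolumeId"]
```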

Share Nothing
From the Amazon experience it is clear that, in spite of the best availability mechanisms adopted by the cloud provider, on rare occasions a few Availability Zones may still be struck by disaster.

In these scenarios we want to ensure that not all users are affected, but only a minimal number. This can be achieved by adopting the 'Shared Nothing' pattern, so that tenants are logically and physically separated within the cloud ecosystem.

This ensures that the failure of part of the infrastructure does not affect everyone in the system.
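
As a sketch of the idea, the hypothetical tenant map below pins each tenant to its own database shard and Availability Zone, so a zone failure translates directly into the (small) list of tenants that need failover handling:

```python
TENANT_MAP = {
    "tenant-a": {"shard": "db-shard-1", "zone": "us-east-1a"},
    "tenant-b": {"shard": "db-shard-2", "zone": "us-east-1b"},
}

def resources_for(tenant_id):
    """Every request for a tenant is served only from that tenant's own shard and zone."""
    home = TENANT_MAP[tenant_id]
    return home["shard"], home["zone"]

def affected_tenants(failed_zone):
    """Only tenants homed in the failed zone are impacted; everyone else keeps running."""
    return [t for t, home in TENANT_MAP.items() if home["zone"] == failed_zone]
```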

Summary
The Amazon cloud outage is a wake-up call about how the cloud should be utilized. There is no automatic switch that provides all the fault tolerance a system needs. The event has, however, reinforced the strong fundamental principles on which applications need to be built in order to be resilient. The incident cannot be seen as a failure of the cloud platform itself, and there is plenty of room for improvement to avoid such situations in the future.

More Stories By Srinivasan Sundara Rajan

Highly passionate about utilizing digital technologies to enable the next-generation enterprise. Believes in enterprise transformation through the Natives (Cloud Native & Mobile Native).
