@CloudExpo: Blog Feed Post

Putting a Price on Uptime

How do you put a price on uptime and more importantly, who should pay for it?

An inability in the cloud to distinguish illegitimate from legitimate requests could lead to unanticipated costs in the wake of an attack. How do you put a price on uptime, and more importantly, who should pay for it?

A “Perfect Cloud”, in my opinion, would be one in which the cloud provider’s infrastructure intelligently manages availability and performance such that, when necessary, new instances of an application are launched to meet the customer’s defined performance and availability thresholds. You know, on-demand scalability that requires no manual intervention. It just “happens” the way it should.

Several providers have all the components necessary to achieve a “perfect cloud” implementation, though for the moment it may require that customers specifically subscribe to one or more of the necessary services. For example, if you combine Amazon EC2 with Amazon ELB, CloudWatch, and Auto Scaling, you’ve pretty much got the components necessary for a perfect cloud environment: automated scalability based on real-time performance and availability of your EC2-deployed application.
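The core of such a setup is a simple policy: when a performance metric stays past a threshold for several consecutive measurement periods, launch another instance. Here is a minimal sketch of that logic in plain Python; the function name, threshold, and period count are all illustrative inventions, not the actual AWS Auto Scaling API:

```python
# Sketch of a CloudWatch-alarm-style scale-up policy: scale only when the
# last `breach_periods` latency samples ALL exceed the threshold.
# All names and numbers here are hypothetical.

def should_scale_up(latencies_ms, threshold_ms=500, breach_periods=3):
    """Return True when sustained degradation warrants a new instance."""
    if len(latencies_ms) < breach_periods:
        return False  # not enough history to judge a sustained breach
    return all(sample > threshold_ms for sample in latencies_ms[-breach_periods:])

# Healthy traffic: no action taken.
print(should_scale_up([220, 310, 280, 330]))   # False
# Sustained degradation: a new instance is launched.
print(should_scale_up([220, 620, 710, 805]))   # True
```

Requiring several consecutive breaches rather than a single spike is what keeps momentary blips from triggering (and billing for) new instances.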

Cool, right?

Absolutely. Except when something nasty happens and your application automatically scales itself up to serve…no one.


AUTOMATIC REACTIONS CAN BE GOOD – AND BAD


BitBucket’s recent experience with DDoS shows that no security infrastructure is perfect; there’s always a chance that something will sneak by the layers of defense put into place by IT whether that’s in the local data center or in a cloud environment. The difference is in how the infrastructure reacts, and what it costs the customer.

Now, a DDoS such as the one that apparently targeted BitBucket was a UDP-based attack, meaning it was designed to flood the network and infrastructure, not the application. It was trying to interrupt service by chewing up bandwidth and resources on the infrastructure. Other types of DDoS, like a Layer 7 DDoS, specifically attack the application: they can consume its resources, which in turn triggers the automatic scaling process, which could result in a whole lot of money being thrown out the nearest window.

Consider the scenario:

  1. An application is deployed in the cloud. The cloud is configured to automatically scale up (launch additional instances) based on response time thresholds.
  2. A Layer 7 DDoS is launched against the application. Layer 7 DDoS is difficult to detect and prevent, and without the proper infrastructure in place it is unlikely to be detected by the infrastructure and even less likely to be detected by the application.
  3. The DDoS consumes all the resources on the application instance, degrading response time, so the infrastructure launches a second instance, and requests are load balanced across both application instances.
  4. The DDoS attack now automatically targets two application instances, and continues to consume resources until the infrastructure detects degradation beyond specified thresholds and automatically triggers the launch of another instance.
  5. Wash. Rinse. Repeat.
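The wash-rinse-repeat loop above can be sketched as a toy simulation. Every figure here (attack volume, per-instance capacity, human-intervention cap, hourly price) is invented purely for illustration:

```python
# Toy simulation of the runaway loop: the attack saturates every new
# instance, so response time never recovers and the infrastructure keeps
# scaling up until a human steps in. All figures are hypothetical.

def instances_launched(attack_rps, capacity_rps, human_cap=20):
    """Count instances launched before a human intervenes at `human_cap`."""
    instances = 1
    while instances < human_cap:
        per_instance_load = attack_rps / instances
        if per_instance_load <= capacity_rps:
            break  # load finally absorbed; scaling stops
        instances += 1  # threshold breached again; launch another instance
    return instances

count = instances_launched(attack_rps=50_000, capacity_rps=1_000)
print(count)             # the attack rides the policy all the way to the cap
print(count * 0.40 * 8)  # hypothetical bill: $0.40/hr per instance, 8 hours
```

With a hypothetical 50,000 requests per second of attack traffic against instances that each handle 1,000, the policy happily launches instance after instance, and the customer pays for every one of them.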

How many instances would be launched before a human being noticed and realized that the “users” were really miscreants?

More importantly for the customer, how much would such an attack cost them?


THIS SOUNDS LIKE A JOB FOR CONTEXTUALLY-AWARE INFRASTRUCTURE


The reason the perfect cloud is potentially a danger to the customer’s budget is that it currently lacks the context necessary to distinguish good requests from bad. Clouds today (and most environments, if we’re honest) lack the ability to examine requests as part of the big picture. That is, the infrastructure doesn’t look at a single request as one of a larger set of requests; it treats each one individually, as a unique request requiring service by an application.
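What "context" means in practice can be sketched very simply: judge each request against the client's recent history instead of in isolation. The sliding window and request limit below are invented values, not any vendor's defaults, and a real contextually-aware device would weigh far more signals than request rate:

```python
# Minimal sketch of contextual inspection: a request is judged as part of
# a larger set (the client's recent activity), not on its own. Window and
# limit are hypothetical.
from collections import defaultdict, deque

class ContextualFilter:
    def __init__(self, window_seconds=10, max_requests=100):
        self.window = window_seconds
        self.limit = max_requests
        self.history = defaultdict(deque)  # client_ip -> recent timestamps

    def allow(self, client_ip, now):
        recent = self.history[client_ip]
        while recent and now - recent[0] > self.window:
            recent.popleft()               # forget requests outside the window
        recent.append(now)
        return len(recent) <= self.limit   # within normal context -> legitimate

f = ContextualFilter(window_seconds=10, max_requests=5)
print([f.allow("10.0.0.1", t) for t in range(6)])  # sixth request is refused
```

The point is not the heuristic itself but where it sits: a per-request view can never make this call, because the information that condemns the sixth request lives in the five that preceded it.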

Without awareness of the context in which such requests are made, the cloud infrastructure is incapable of detecting and preventing attacks that could potentially lead to customers incurring costs well beyond what they expected. The cost of an attack in the local data center might be a loss of availability: an application might crash and require the poor guy on call to come in and deal with the situation, but in terms of monetary cost it is virtually “free” to the organization, excepting the potential loss of revenue from customers unable to buy widgets who refuse to return later.

But in the cloud, this lack of context could be financially devastating. An attack moves at the speed of the Internet, and a perfect cloud is, one hopes, designed to react just as quickly. Just how many instances would be launched (incurring costs to the customer) before such an attack was detected? For all the monitoring offered by providers today, it’s not clear whether any of them can discern an attack scenario from a seasonal rush of traffic, and it’s even less clear what the infrastructure would do about it if it could.

And once we add in the concept of intercloud, this situation could get downright ugly. The premise is that if an application is unavailable at cloud provider X according to the customer’s defined thresholds, requests would be directed to another instance of the application in another cloud, and maybe even a third. How many cloud-deployed versions of an application could potentially be affected by a single, well-executed attack? The costs and reach of such a scenario boggle the mind.

My definition of a perfect cloud, methinks, needs a slight adjustment. A perfect cloud, in addition to automatically scaling an application to meet demand, must also be able to discern legitimate users from illegitimate ones, ignore illegitimate requests while processing legitimate ones, and scale up only when legitimate volumes of requests require it.
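The adjusted definition amounts to one change in the scaling pipeline: only requests judged legitimate feed the scaling metric. A sketch, with a placeholder classifier standing in for real contextually-aware infrastructure and a made-up per-instance capacity:

```python
# Sketch of the adjusted "perfect cloud": size the deployment from
# legitimate demand only, so attack traffic cannot trigger new instances.
# The classifier and capacity figure are hypothetical placeholders.

def instances_needed(requests, is_legitimate, capacity_per_instance=100):
    """Compute instance count from legitimate request volume alone."""
    legit = sum(1 for r in requests if is_legitimate(r))
    return max(1, -(-legit // capacity_per_instance))  # ceiling division

# Hypothetical mix: 150 real users buried under 10,000 attack requests.
traffic = [{"bad": False}] * 150 + [{"bad": True}] * 10_000
print(instances_needed(traffic, lambda r: not r["bad"]))  # 2 instances
print(instances_needed(traffic, lambda r: True))          # 102 without context
```

Same traffic, two very different bills: filtering before scaling keeps the deployment at the two instances its real users need, while context-free scaling would provision for all ten thousand attackers.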

 


PUTTING A PRICE ON UPTIME


The question many people have (I know I certainly do) is: who pays for the resulting cost of such an attack?

 

It’s often been said that it’s difficult if not impossible to put a price on downtime, but what about uptime? What about the cost incurred by the launch of additional instances of an application in the face of an attack? An attack that cannot be reasonably detected by an application? An attack that is clearly the responsibility of the infrastructure to detect and prevent; the infrastructure over which the customer, by definition and design, has no control?

Who should pay for that? The customer, as a price of deploying applications in the cloud, or the provider, as a penalty for failing to provide a robust enough infrastructure to prevent it?


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
