Case (For & Against) X-Driven Scalability in Cloud Computing Environments

Examining responsibility for auto-scalability in cloud computing environments

[ If you’re coming in late, you may also want to read the previous entries on the network, the application, and the management framework ]

Today, the argument over responsibility for auto-scaling in cloud computing and other highly virtualized environments remains mostly confined to e-mail conversations and gatherings at espresso machines. It’s an argument that needs more industry and “technology consumer” awareness, because it’s ultimately one of the underpinnings of a dynamic data center architecture; it’s the piece of the puzzle that makes or breaks one of the highest value propositions of cloud computing and virtualization: scalability.

The question appears to be a simple one: what component is responsible not only for recognizing the need for additional capacity, but for acting on that information to actually initiate the provisioning of more capacity? Neither the question nor the answer, it turns out, is as simple as it appears at first glance. There are a variety of factors to consider, and the arguments for – and against – each specific component carry considerable weight.

We’ve examined each of the three possibilities, the three “players” in the scalability game in dynamic environments: the network, the application, and the management framework. All have a piece of the puzzle, but none have both the visibility and the ability to initiate a provisioning event – at least not in a way that maintains cost and operational efficiency.

RESOLUTION: COLLABORATION

From our previous discussions it seems obvious that the application cannot – and indeed should not – be enabled with the control required to manage scalability. But the network (the load balancing service) and the management framework could both, ostensibly, be enabled with the ability to control the provisioning process and imbued with visibility into the data necessary to initiate scaling events. Just as true, however, is that doing so in either case would have serious repercussions for operational stability and would potentially increase costs once the integration requirements are taken into consideration.

Thus, it seems the most efficient and cost-effective means of managing scalability in cloud computing environments is via a collaborative operational process involving all three components: application, network, and management framework.

RESPONSIBILITY: APPLICATION

The application remains responsible for providing the per-instance capacity data required. This may be as simple as a connection or throughput high-water mark, or as complex as near-time load data. In a truly dynamic, automated data center this information would be provided by the application through some standardized mechanism, such as a specific API-accessible service. Such standardization would enable portability across environments, eliminate the possibility of operator error when communicating those limits to the load balancing service, and provide the means by which the application could leverage knowledge of its application infrastructure constraints to determine dynamically what those limits may be. That’s important even when excluding the possibility of inter-environment portability, because intra-environment movement over time may change the capabilities of the underlying server infrastructure in ways that impact the capacity of the application itself.
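
To make that concrete, consider a minimal sketch of such an API-accessible capacity service. Everything here is hypothetical – the /capacity endpoint, the field names, the limits themselves – because no such standard exists today; the point is the pattern, in which the application publishes its own limits rather than relying on an operator to key them into the load balancing service.

```python
# Hypothetical sketch: a per-instance capacity service. The /capacity
# endpoint and its fields are illustrative assumptions, not a standard.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical per-instance limits; a real application might derive these
# dynamically from knowledge of its underlying server infrastructure.
CAPACITY = {
    "max_connections": 500,       # connection high-water mark
    "max_throughput_kbps": 8000,  # throughput high-water mark
    "current_connections": 137,   # near-time load data
}

class CapacityHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Publish capacity data so the load balancing service can read the
        # limits directly instead of an operator configuring them by hand.
        if self.path == "/capacity":
            body = json.dumps(CAPACITY).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), CapacityHandler).serve_forever()
```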

RESPONSIBILITY: NETWORK

The network, or load balancing service, remains the only point in the architecture at which overall application capacity is easily obtained and at which per-instance capacity is monitored. This data is critical to determining at what point additional (or less) capacity may be necessary. While the load balancing service may be assigned the responsibility of notifying the management framework when more or less capacity is required, it should not be responsible for initiating the provisioning process. The integration required to do so would effectively negate many of the efficiency benefits gained by the overall scaling architecture, and is fraught with potential obstacles in the face of still-evolving management frameworks.

The network is adept at monitoring the historical and current capacity of an overall application (defined as the interface with which clients interact and the aggregation point at which multiple application instances combine to act as a single entity). But thresholds and limitations – particularly those related to costs – are not necessarily part of the configuration of such services, nor should they be. Such operational and business requirements are best codified and managed by a management framework, as they are unique not only to the customer but to the environment in which applications are deployed. This also leaves open the possibility of cross-environment scalability, enabled by broker-enabled management frameworks and components. While delivery of applications deployed across clouds is certainly under the purview of the load balancing service, provisioning resources across those environments from the network is not feasible.
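
A rough sketch of that division of labor, reusing the hypothetical capacity API above: the load balancing service aggregates per-instance load and, when utilization crosses a threshold, does nothing more than notify the management framework. The instance addresses, the notification endpoint, and the 80 percent threshold are assumptions made for illustration only.

```python
# Hypothetical sketch: the load balancing service monitors aggregate
# capacity and notifies the management framework -- it never provisions.
import json
import time
import urllib.request

INSTANCES = ["http://10.0.0.11:8081", "http://10.0.0.12:8081"]  # assumed
MGMT_NOTIFY_URL = "http://mgmt.example.internal/notify"         # assumed

def fetch_json(url):
    with urllib.request.urlopen(url, timeout=2) as resp:
        return json.loads(resp.read())

def check_capacity():
    # Sum current load and limits across every instance behind the VIP.
    total_conns = total_limit = 0
    for base in INSTANCES:
        cap = fetch_json(base + "/capacity")
        total_conns += cap["current_connections"]
        total_limit += cap["max_connections"]
    # At 80% utilization (an arbitrary illustrative threshold), send an
    # event upstream; the decision to act belongs to the framework.
    if total_conns >= 0.8 * total_limit:
        event = json.dumps({
            "app": "storefront",  # assumed application name
            "event": "capacity-high",
            "utilization": total_conns / total_limit,
        }).encode()
        req = urllib.request.Request(
            MGMT_NOTIFY_URL, data=event,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=2)

if __name__ == "__main__":
    while True:
        check_capacity()
        time.sleep(30)  # poll interval, also an assumption
```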

RESPONSIBILITY: MANAGEMENT FRAMEWORK

The management framework, integrated with billing, metering, and provisioning systems, is the appropriate place for the initiation of provisioning events to occur. What the management framework cannot do – without bogging down the infrastructure architecture and unnecessarily complicating it – is efficiently gather the requisite capacity data itself; for that it depends on the network’s notifications, on the basis of which it determines whether or not to initiate a scaling event. Leaving the initiation decision to an “external” management framework has the added benefit of allowing future innovation to occur. For example, the management framework might one day be leveraged to offer additional services based on capacity as an alternative to more capacity: when application performance is the trigger for a scaling event, customers might have the option of enabling other infrastructure services – optimizations and accelerations – that can ameliorate the need for additional capacity. From a provider standpoint, increasing the revenue per instance through value-added services makes more sense – and provides a higher ROI – than simply adding instances. But without a management framework capable of factoring prioritized services into the decision-making process, this ability becomes a far more difficult proposition.
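
A final sketch, then, of what that initiation logic might look like inside the management framework. The policy fields, the acceleration service, and the decision rules are all hypothetical; they illustrate only that a component integrated with billing and metering can weigh a value-added service against the cost of another instance before provisioning anything.

```python
# Hypothetical sketch: the management framework receives the network's
# notification and weighs cost and policy before initiating anything.
from dataclasses import dataclass

@dataclass
class Policy:
    max_instances: int        # customer's cost ceiling (assumed field)
    instance_cost: float      # hourly price of one more instance
    acceleration_cost: float  # hourly price of a value-added optimization

def handle_capacity_event(event: dict, running: int, policy: Policy) -> str:
    if event.get("event") != "capacity-high":
        return "ignore"
    # Prefer a value-added service when it is cheaper than another instance
    # and the trigger is performance rather than a hard connection limit.
    if (event.get("trigger") == "performance"
            and policy.acceleration_cost < policy.instance_cost):
        return "enable-acceleration"
    if running < policy.max_instances:
        return "provision-instance"  # kick off the provisioning workflow
    return "alert-operator"          # cost ceiling reached; a human decides

# Example: acceleration is cheaper than a fourth instance, so a
# performance-triggered event enables the service instead of scaling out.
policy = Policy(max_instances=4, instance_cost=0.50, acceleration_cost=0.20)
print(handle_capacity_event(
    {"event": "capacity-high", "trigger": "performance"},
    running=3, policy=policy))
```

Note that the performance-triggered event resolves to the cheaper acceleration service rather than a new instance – exactly the kind of decision neither the application nor the load balancing service has the business context to make.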

THIS is INFRASTRUCTURE 2.0 in ACTION

You may recall that the definition of “Infrastructure 2.0” was so broad as to seem unrealizable. But when combined with the unspoken requirement for collaboration across components, such a definition is certainly not only realizable, but desirable. Infrastructure 2.0 was never about a single component being able to provide everything; it was always about enabling collaboration across the infrastructure in the manner that has been successfully carried out in the software arena to provide a highly connected, intelligent “network” of applications.

By leveraging collaboration in the infrastructure we can achieve the goal of a dynamic data center. Whether the result is a merely highly virtualized architecture or a fully cloud computing-enabled one is not nearly as relevant as making the data center more operationally efficient through integration and collaboration.

The answer to the question forming the basis for this series of posts – “What component is responsible not only for recognizing the need for additional capacity, but for acting on that information to actually initiate the provisioning of more capacity?” – is that no single component can be responsible for both and still maintain the efficiency and performance of the environment and the application. The answer is that it requires an architecture: a collaborative, dynamic architecture.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
