Ops First Rule

“An application is only as reliable as its least reliable component”

It’s unlikely there’s anyone in IT today who doesn’t understand the role of load balancing in scaling applications. Whether in the cloud or not, load balancing is the key mechanism through which load is distributed to ensure horizontal scale of applications. It’s equally unlikely there’s anyone in IT who doesn’t understand the relationship between load balancing and high availability (reliability). High-availability (HA) architectures are almost always implemented using load balancing services to ensure a seamless transition from one service instance to another in the event of a failure.
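
The mechanism itself is simple enough to sketch. The snippet below is a minimal illustration only (not BIG-IP code; the pool addresses and health check are hypothetical): requests are spread round-robin across a pool, and any member that fails its health check is skipped, which is why the same mechanism delivers both horizontal scale and failover.

```python
import itertools

# Hypothetical pool of application servers (addresses are illustrative only).
POOL = ["10.0.0.11:443", "10.0.0.12:443", "10.0.0.13:443"]
_rotation = itertools.cycle(POOL)

def is_healthy(member: str) -> bool:
    """Stand-in for a real health monitor (e.g., an HTTPS probe of the member)."""
    return True  # a real deployment would actually probe the member here

def next_member() -> str:
    """Round-robin selection that skips members failing their health check.

    Distributing requests across the pool gives horizontal scale; skipping
    unhealthy members gives the seamless failover that HA designs rely on.
    """
    for _ in range(len(POOL)):
        member = next(_rotation)
        if is_healthy(member):
            return member
    raise RuntimeError("no healthy pool members available")

print(next_member())  # e.g. 10.0.0.11:443
```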

What’s often overlooked is that scalability and HA aren’t important just for applications. Services – whether application or network-focused – must also be reliable. It’s the old “only as strong as the weakest link in the chain” argument. An application is only as reliable as its least reliable component – and that includes the services and infrastructure upon which that application relies. This is – or should be – the Ops First Rule: the rule that guides the design of data center architectures.
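
The arithmetic behind the rule makes the point concrete. Assuming independent failures and purely illustrative availability figures, the availability of the whole chain is the product of its parts, so the chain is always at least as unreliable as its weakest component:

```python
# Illustrative availability figures (assumptions, not measurements).
components = {
    "application servers": 0.999,
    "network services":    0.9995,
    "identity store":      0.99,   # the weakest link in this example
}

# Assuming independent failures, chain availability is the product of the parts.
chain = 1.0
for name, availability in components.items():
    chain *= availability

print(f"chain availability: {chain:.4f}")   # ~0.9885 -- below every individual part
print(f"weakest component:  {min(components.values()):.4f}")
```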

This requirement becomes more and more obvious as emerging architectures combining the data center and cloud computing are implemented, particularly when federating identity and access services.  That’s because it is desirable to maintain control over the identity and access management processes that authenticate and authorize use of applications no matter where they may be deployed. Such an architecture relies heavily on the corporate identity store as the authoritative source of both credentials and permissions. This makes the corporate identity store a critical component in the application dependency chain, one that must necessarily be made as reliable as possible.

Which means you need load balancing.

A good example of how this architecture can be achieved is found in BIG-IP load balancing support for Microsoft’s Active Directory Federation Services (AD FS).

AD FS and F5 Load Balancing

Microsoft’s Active Directory Federation Services (AD FS) server role is an identity access solution that extends the single sign-on (SSO) experience for directory-authenticated clients (typically provided on the intranet via Kerberos) to resources outside the organization’s boundaries, such as cloud computing environments. To ensure high availability, performance, and scalability, the F5 BIG-IP Local Traffic Manager (LTM) can be deployed to load balance an AD FS server farm.
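
A BIG-IP health monitor handles this natively, but the kind of probe involved can be sketched in a few lines. The farm member hostnames below are hypothetical, and the metadata path is the conventional AD FS federation metadata endpoint (verify it against your own deployment); a member that fails the probe would be pulled from rotation.

```python
import ssl
import urllib.error
import urllib.request

# Hypothetical AD FS farm members; the path is the conventional AD FS
# federation metadata endpoint (confirm it for your deployment).
FARM = ["https://adfs1.example.com", "https://adfs2.example.com"]
PROBE_PATH = "/federationmetadata/2007-06/federationmetadata.xml"

def probe(member: str, timeout: float = 5.0) -> bool:
    """Return True if the farm member answers the metadata probe with HTTP 200."""
    ctx = ssl.create_default_context()  # validate the server certificate
    try:
        with urllib.request.urlopen(member + PROBE_PATH, timeout=timeout, context=ctx) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

healthy = [m for m in FARM if probe(m)]
print("members in rotation:", healthy or "none -- farm is down")
```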

[Figure: BIG-IP load balancing an AD FS server farm]

There are several scenarios in which BIG-IP can load balance AD FS services.

1. To enable reliability of AD FS for internal clients accessing external resources, such as those hosted in Microsoft Office 365. This is the simplest of architectures and the most restrictive in terms of access for end-users as it is limited to only internal clients.

2. To enable reliability of AD FS and AD FS proxy servers, which provide external end-user SSO access to both internal federation-enabled resources as well as partner resources like Microsoft Office 365. This is a more flexible option as it serves both internal and external clients.

3. BIG-IP Access Policy Manager (APM) can replace the AD FS proxy servers required for external end-user SSO access, which eliminates a tier and enables pre-authentication at the perimeter, offering both the flexibility required (supporting both internal and external access) and a more secure deployment.

In all three scenarios, F5 BIG-IP serves as a strategic point of control in the architecture, assuring reliability and performance of the services upon which applications depend, particularly authentication and authorization. Using BIG-IP APM instead of AD FS proxy servers both simplifies the architecture and makes it more agile, because BIG-IP APM is inherently more programmable and flexible in terms of policy creation. Being deployed on the BIG-IP platform, APM can take full advantage of the context in which requests are made, ensuring that identity and access control go beyond simple credentials and take into consideration device, location, and other contextual clues that enable a more secure system of authentication and authorization. High availability – and ultimately scalability – is preserved for all services by leveraging the core load balancing and HA functionality of the BIG-IP platform.

All components in the chain are endowed with HA capabilities, making the entire application more resilient and able to withstand minor and major failures.

Using BIG-IP LTM for load balancing AD FS serves as an adaptable and extensible architectural foundation for a phased deployment approach. Rolling out AD FS services for internal clients only is a sensible pilot phase, and the simplest to implement. Using BIG-IP as the foundation for such an architecture enables further expansion in subsequent phases, such as introducing BIG-IP APM in a phase two implementation that brings flexibility of access location to the table.

Further enhancements can then be made by bringing context into access decisions, enabling more complex and business-focused access policies to be implemented. Time-based restrictions on clients or locations can be deployed and enforced as operations or business requirements dictate.
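
Roughly, such a context-aware decision looks like the sketch below. This is not APM configuration; the checks, network range, and business-hours window are hypothetical stand-ins for rules that would actually be expressed in an APM access policy.

```python
from dataclasses import dataclass
from datetime import time
from ipaddress import ip_address, ip_network

CORPORATE_NET = ip_network("10.0.0.0/8")    # hypothetical internal range
BUSINESS_HOURS = (time(7, 0), time(19, 0))  # hypothetical time window

@dataclass
class RequestContext:
    user_authenticated: bool
    device_managed: bool   # e.g., passed a device posture check
    source_ip: str
    local_time: time

def allow(ctx: RequestContext) -> bool:
    """Grant access only when credentials AND context meet policy.

    Credentials alone are not enough: an unmanaged device or an external
    request outside business hours is refused even if authentication succeeded.
    """
    if not ctx.user_authenticated:
        return False
    internal = ip_address(ctx.source_ip) in CORPORATE_NET
    in_hours = BUSINESS_HOURS[0] <= ctx.local_time <= BUSINESS_HOURS[1]
    if internal:
        return True                          # internal clients: credentials suffice
    return ctx.device_managed and in_hours   # external clients: stricter rules

print(allow(RequestContext(True, True, "203.0.113.7", time(9, 30))))   # True
print(allow(RequestContext(True, False, "203.0.113.7", time(23, 0))))  # False
```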

Reliability is a Least Common Factor Problem

Reliability must be enabled throughout the application delivery chain to ultimately ensure the reliability of each application. Scalability is especially important for those dependent services, such as identity and access management, that are intended to be shared across multiple applications. While there are certainly many other load balancing services that could be used to enable reliability of these services, an extensible and highly scalable platform such as BIG-IP is required to ensure both the reliability and the scalability of shared services upon which many applications rely.

The advantage of a BIG-IP-based application delivery tier is that its core reliability and scalability services extend to any of the many services that can be deployed on it. By simplifying the architecture through application delivery service consolidation, organizations also gain the operational consistency that keeps management and maintenance costs down.

Reliability is a least common factor problem, and the Ops First Rule should be applied when designing a deployment architecture to assure that all services in the delivery chain are as reliable as they can be.


