By Lori MacVittie
April 25, 2014 09:00 AM EDT
Way back in the early days of the Internet, scalability was an issue (the more things change...). One of the answers to this problem was to scale out web servers using a fairly well-proven concept called load balancing. Simply put: distribute the load across web servers to make sure everyone gets served in a timely fashion. We see this in action at stores every day when more checkout lines are added as demand increases. Well, we hope we see this in action. Too often we don't, much to our chagrin.
Anyway, the way in which early load balancing worked was simply to take a couple of variables (IP address and TCP port), hash them together, and stick the result in the equivalent of a queue for a web server. Because hash values tend to distribute fairly evenly, this worked well (until we ran into the mega-proxy issue, thanks to folks like CompuServe and AOL).
This is called "network load balancing" because, well, it uses network variables to distribute load. It's quite fast, actually, because it's based on variables that sit at fixed locations within a single packet: source or destination IP and TCP port. All the work is on the ingress, the inbound side, and once the decision has been made it's a pretty simple thing to hash future packets and match them up before sending them on their way. Voila. Network load balancing.
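To make that concrete, here's a minimal sketch of hash-based distribution in Python. The server names and the choice of hash are illustrative assumptions, not any particular product's implementation:

```python
import hashlib

# Hypothetical pool of web servers; names are illustrative only.
SERVERS = ["web1", "web2", "web3", "web4"]

def pick_server(src_ip: str, src_port: int) -> str:
    """Hash the network variables (source IP and source TCP port) and
    map the digest onto the pool. The same client always lands on the
    same server, and no per-connection state needs to be stored."""
    key = f"{src_ip}:{src_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

print(pick_server("203.0.113.7", 51423))  # deterministic, e.g. "web3"
```

The mega-proxy problem falls out of this naturally: when thousands of users sit behind a handful of proxy IP addresses, a hash keyed largely on source IP piles them all onto the same few servers.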
Application load balancing, however, arose because network load balancing was based entirely on inbound variables. It couldn't take into consideration how loaded the chosen server was, whether its response time was falling within acceptable business parameters, or whether it was at capacity. Those variables were all on the server side and required visibility into the application, not the client.
It also couldn't account for the fact that virtual servers were popping up everywhere (multiple applications served from the same IP address and port), which forced the web server to become a load balancer itself. Which, if you think about it, was kind of crazy. If a single server couldn't scale well enough to meet demand, how was putting a single server in front of the others going to help the situation?
Application load balancing (which has also been given other fancy names over the years, like content switching or routing, application switching, application or page routing, etc.) is really focused on distributing load across applications intelligently. While it can use ingress variables like IP address and port, it generally doesn't, because those don't offer insight into which server (application, web, virtual, whatever) is going to be able to respond (has capacity) in a time frame acceptable to the business (response time) for a specific application (or piece of the application, like images).
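A sketch of that decision logic might look like the following. The metrics, thresholds, and pool are assumptions for illustration, not any vendor's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    current_connections: int   # server-side state, not in any packet
    max_connections: int
    avg_response_ms: float     # rolling average from health monitoring

# Hypothetical pool state gathered by monitoring the applications.
POOL = [
    Backend("app1", 180, 200, 45.0),
    Backend("app2", 60, 200, 120.0),
    Backend("app3", 30, 200, 38.0),
]

RESPONSE_SLA_MS = 100.0  # business-defined acceptable response time

def pick_backend(pool: list[Backend]) -> Backend:
    """Consider only servers with spare capacity that are meeting the
    response-time SLA, then prefer the fastest of those."""
    eligible = [b for b in pool
                if b.current_connections < b.max_connections
                and b.avg_response_ms <= RESPONSE_SLA_MS]
    if not eligible:
        eligible = pool  # degrade gracefully rather than drop requests
    return min(eligible, key=lambda b: b.avg_response_ms)

print(pick_backend(POOL).name)  # "app3": under capacity and fastest
```

Note that every input to that decision lives on the server side, which is precisely why ingress-only variables can't get you there.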
The difference between the two lies primarily in the variables used to distribute load: network load balancing relies solely on network variables, while application load balancing relies mainly on application variables.
This change in load balancing techniques opened up all sorts of new efficiencies and scalability options because it allowed architectures to specialize - route requests for images to servers focused on serving images, requests for static content to servers focused on serving static content, and so on. It also enabled persistence (sticky sessions), which greatly accelerated the ability to scale out stateful applications on the web.
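Here's a hypothetical sketch combining both ideas: requests are routed by content type first, and a session identifier pins a stateful client to the same application server on every subsequent request (pool names and the session scheme are invented for illustration):

```python
import hashlib

# Hypothetical specialized pools; names are illustrative only.
IMAGE_POOL = ["img1", "img2"]
STATIC_POOL = ["static1", "static2"]
APP_POOL = ["app1", "app2", "app3"]

SESSIONS: dict[str, str] = {}  # session id -> pinned app server

def stable_pick(key: str, pool: list[str]) -> str:
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

def route(path: str, session_id: str = "") -> str:
    # Specialization: send each content type to servers tuned for it.
    if path.endswith((".png", ".jpg", ".gif")):
        return stable_pick(path, IMAGE_POOL)
    if path.startswith("/static/"):
        return stable_pick(path, STATIC_POOL)
    # Persistence: a known session always returns to its server.
    if session_id and session_id in SESSIONS:
        return SESSIONS[session_id]
    server = stable_pick(session_id or path, APP_POOL)
    if session_id:
        SESSIONS[session_id] = server
    return server

print(route("/logo.png"))                # image-specialized pool
print(route("/cart", session_id="abc"))  # pinned on first request
print(route("/cart", session_id="abc"))  # same server every time after
```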
Why Is It Important to SDN?
The reason this is important to SDN architectures is that layer 3 switches can, in fact, support network load balancing. Fairly easily, in fact. If you look at how link aggregation (trunking) is implemented in most switches, you'll see it uses network load balancing techniques to distribute load across trunked links, and that the algorithms are pretty much the same ones we used back in the day to load balance servers based on network variables. The hash is simple (and easily implemented) and doesn't require storing state, because it's always based on the same variables, easily extracted from IP and TCP headers, and it doesn't really tax the system. Forwarding tables are basically sets of inbound IP addresses, TCP ports and (switch) ports matched to outbound IP addresses, TCP ports and (switch) ports. So you can see that network load balancing wouldn't overly tax a controller (it just has to hash the right values and insert a forwarding entry) or a switch.
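The link-aggregation case makes that statelessness easy to see. In this sketch (the toy hash and field names are assumptions; real switches typically use CRC- or XOR-based hardware hashes), the same flow always maps to the same trunked link, so nothing has to be remembered per connection:

```python
# Map a flow's fixed-position header fields onto one of N trunked links.
def pick_link(src_ip: str, dst_ip: str,
              src_port: int, dst_port: int, num_links: int) -> int:
    """Stateless: re-hashing any packet of the same flow yields the
    same link, so no per-flow table is needed."""
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}"
    return sum(flow.encode()) % num_links  # toy hash for illustration

# A controller's forwarding entry is then just the computed mapping:
entry = {
    "match": ("10.0.0.5", "10.0.1.9", 44321, 80),
    "out_link": pick_link("10.0.0.5", "10.0.1.9", 44321, 80, 4),
}
print(entry)  # the flow's 4-tuple matched to one of links 0-3
```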
But it wouldn't be application centric, or able to take into consideration the things modern load balancing services care about - application status, connection capacity, and response times, not to mention enabling specialization of services. To be application centric, load balancing must participate in the data path and have visibility into variables that aren't available in packet headers - they live in payloads and in the application server (instances) itself. Given the implications of being stateful rather than stateless, that burden would overwhelm a centralized controller.
Thus, while SDN principles are certainly applicable, the architecture used to implement SDN for lower-order network services is not going to be the same architecture used to implement it for higher-order services. When evaluating SDN solutions, it's again important to consider how any two SDN architectures (core and application) complement one another, integrate with one another, and collaborate to enable a complete software-defined network architecture that supports the unique needs of both layers 2-3 and layers 4-7.