@ThingsExpo: Blog Feed Post

PubSub and Other Disruptive Emerging Network Models

What's new with software-defined architectures is not just the logical separation but the physical decoupling

When people write about software-defined architectures being "disruptive" to the network they're doing a bit of a disservice to just how much change is occurring under the hood in the engine that drives today's businesses. The notion of separating control and data planes is superficial in that it describes a general concept and it isn't really all that radical a change, if you think about it.

The control and data planes have always been separate. We have, since the need for web-scale networks came about, implemented separate topological (and usually physical) networks specifically for the purpose of segregating control traffic from the data path. The reasons for this are many: to keep management (control) traffic from interfering with the delivery of applications (and vice versa), to enable a model in which control over the critical path for applications could be secured and to ensure access to necessary control functions in the face of failure or attack.

What's new with software-defined architectures is not just the logical separation but the physical decoupling and change in component responsibility. In traditional networks there is a logically separate control plane, but it is distributed; it resides on each physical component. In software-defined networks it is physically separate but it is centralized; control responsibility resides in a single component, the "controller".

Now, OpenStack and emerging models for scaling the systems that will be responsible for managing communication with and for the Internet of Things (like MQTT) use a similar control model, but it is passive rather than active like the software-defined architectural model. That's the nature of PubSub imposing itself on the network.

PubSub Control Model

PubSub (publish/subscribe) is a familiar model to application developers. It's a middleware staple that has been used for a very long time to distribute messages to a variable set of systems. In a nutshell, PubSub is based on the notion of a "queue" to which authorized components can publish events or messages of interest and to which interested components can subscribe. Events or messages have a lifetime (like a TTL) and eventually expire. In the interim, subscribed components are expected to check the queue for messages periodically. They poll for events or messages.

The queue itself is much like a switch or router's queue, except messages in the queue are duplicated until the TTL runs out and the message expires.

PubSub is passive; that is, it does not actively distribute messages. It merely serves as a kind of centralized repository, making relevant information about the state of applications and/or the network available to the components that need it.
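The queue mechanics described above can be sketched in a few lines. This is a minimal, illustrative sketch and not any real broker's API: messages carry a TTL, remain visible to every subscriber until they expire, and subscribers poll rather than being pushed to.

```python
import time

class PubSubQueue:
    """A minimal, illustrative pub/sub queue (hypothetical, not a real broker API)."""

    def __init__(self):
        self._messages = []  # list of (expiry_time, topic, payload)

    def publish(self, topic, payload, ttl=30.0):
        # Messages live until their TTL runs out, then expire.
        self._messages.append((time.monotonic() + ttl, topic, payload))

    def poll(self, topic):
        """Return unexpired messages for a topic; expired ones are dropped."""
        now = time.monotonic()
        self._messages = [m for m in self._messages if m[0] > now]
        return [payload for expiry, t, payload in self._messages if t == topic]

queue = PubSubQueue()
queue.publish("network/events", {"event": "instance_launched", "ip": "10.0.0.7"})

# Two independent subscribers poll and each see the same message, because
# the message is retained (effectively duplicated to readers) until the TTL expires.
seen_by_a = queue.poll("network/events")
seen_by_b = queue.poll("network/events")
```

Note that nothing here pushes anything: the queue just holds state, and each reader decides when to look.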

Centralized Control Model

OpenFlow-based SDN, by contrast, is active. That is, it not only serves as a centralized repository for the state of applications and/or the network, but it actively distributes messages to components based on events. For example, a controller might receive a message from another system indicating the launch of a new application instance. That event triggers a series of actions on the controller that includes informing the affected network components of configuration changes. In a passive, PubSub model, the network components themselves might be polling for such an event and, upon receiving one, would initiate the appropriate configuration changes themselves.
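A rough sketch of that active model, with all names hypothetical: the controller receives an event and pushes the resulting configuration change out to every affected component itself.

```python
class Component:
    """A managed network component that accepts configuration pushed to it."""
    def __init__(self, name):
        self.name = name
        self.config = {}

    def apply(self, change):
        self.config.update(change)

class Controller:
    """Centralized, active controller (illustrative, not OpenFlow itself):
    an incoming event directly triggers pushes to managed components."""
    def __init__(self, components):
        self.components = components

    def on_event(self, event):
        if event["type"] == "instance_launched":
            change = {"route_to": event["ip"]}
            for c in self.components:
                c.apply(change)  # the controller dictates the change

switches = [Component("sw1"), Component("sw2")]
ctrl = Controller(switches)
ctrl.on_event({"type": "instance_launched", "ip": "10.0.0.7"})
```

The key contrast with the passive model: the components never ask; the controller tells.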

We can simplify the description of the differences even more: a controller-based architecture uses a push model, while a pubsub-based architecture uses a pull model. What this doesn't illustrate well is that in a push model, the centralized controller must know how to communicate the desired changes to each and every component it is controlling. That's one of the reasons original models standardized on OpenFlow and were tightly focused on L2-4 stateless networking: it could be easily standardized down to a common forwarding table. While different components might internalize that differently, the basic information was always the same: IP addresses, ports, and actions.
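That common forwarding table reduces, conceptually, to match fields plus an action. The entry layout below is illustrative only, not the actual OpenFlow wire format:

```python
# An illustrative stateless L2-4 flow table: each entry is just
# match fields (addresses, ports) and an action.
flow_table = [
    {"match": {"src_ip": "10.0.0.7", "dst_port": 80}, "action": "forward:port2"},
    {"match": {"src_ip": "10.0.0.9"}, "action": "drop"},
]

def lookup(packet):
    """Return the action of the first entry whose match fields all agree."""
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return "flood"  # default behavior when nothing matches

action = lookup({"src_ip": "10.0.0.7", "dst_port": 80})
```

Because every device can be described this way, a single controller can push the same basic information everywhere, which is exactly what breaks down once stateful L4-7 policy enters the picture.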

As we move up the stack into L4-7 stateful networking, however, this model becomes more burdensome because of the complexity of rule sets and differences in policy models across such a broad set of networking domains. Hence the plug-in support in controllers like OpenDaylight for "other" control protocols. But the basic premise of the model remains the same, regardless of the control protocol: the centralized controller dictates the changes to all components. It pushes those changes to the network. Both control and execution are centralized. The controller tells components to change their configuration.

PubSub centralizes control but decentralizes execution. The control plane is still centralized; there is one authoritative system responsible for disseminating change across the network, but each individual component (or domain controller, but we'll get to that in a minute) is responsible for executing the appropriate changes based on its configured policies and services. A pubsub controller never tells a component "change this now"; that's up to the individual component (or domain controller).

The Integrated Control Model

To make things even more confusing (and disruptive), these models may be used simultaneously. A software-defined architecture might be based on a centralized control model with domain controllers for specific networking functions (like security and application delivery) integrated via a pubsub-based model.


This is where we start seeing models that combine emerging technologies like OpenStack and SDN architectures. OpenStack manages at the data center level, and at its heart is a pubsub model that can be used by domain controllers (stateless L2-4 SDN, stateful L4-7 SDN, etc.) to receive notification of changes in the network and subsequently push those changes, using the appropriate control protocols, to the components they manage.
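A sketch of that integration, with all names hypothetical: a domain controller pulls change notifications from a pubsub bus, then pushes the resulting configuration down to the devices in its own domain.

```python
class EventBus:
    """Minimal pub/sub bus standing in for the data-center message layer."""
    def __init__(self):
        self.events = []

    def publish(self, event):
        self.events.append(event)

    def poll(self):
        drained, self.events = self.events, []
        return drained

class Device:
    def __init__(self):
        self.config = {}

class DomainController:
    """A domain controller (say, for L4-7 application delivery) that PULLS
    events from the bus, then PUSHES config to its own devices."""
    def __init__(self, bus, devices):
        self.bus = bus
        self.devices = devices

    def sync(self):
        for event in self.bus.poll():
            if event["type"] == "instance_launched":
                for d in self.devices:
                    d.config["pool_member"] = event["ip"]

bus = EventBus()
adc = DomainController(bus, [Device(), Device()])
bus.publish({"type": "instance_launched", "ip": "10.0.0.7"})
adc.sync()  # pull from the bus, push to the devices
```

Both models coexist here: passive pubsub between OpenStack and the domain controllers, active push between each domain controller and its devices.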

Needless to say, the term "disruptive" is really inadequate to describe the level of change in the network required to support either model (or both). Both require significant changes not just to the network itself but to the way in which the network is fundamentally provisioned and managed. It's not just a new CLI or management console; these models dramatically change the design and management of networks.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
