IT’s Third Platform By @PlexxiInc [#SDN #BigData]

The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends

With the blurring of technology lines, the rise of competitive companies, and a shift in buying models all before us, it would appear we are at the cusp of ushering in the next era in IT — the Third Platform Era. But as with the other transitions, it is not the technology or the vendors that trigger a change in buying patterns. There must be fundamental shifts in buying behavior driven by business objectives.

The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends: the proliferation of data (read: Big Data) and the need for additional performance and scale. In many regards, the first begets the second. As data becomes more available — via traditional datacenters and both public and private cloud environments — applications look to use that data, which means the applications themselves have to go through an evolution to account for the scale and performance required.

Scale up or scale out?
When the industry talks about scale, people typically trot out Moore's Law to explain how capacity doubles roughly every 18 to 24 months. Strictly speaking, Moore's Law is more observation than law, and it originally described the number of transistors on an integrated circuit. It has, however, become a fairly loosely used way to think about performance over time.

Of course, as the need for compute resources skyrocketed, the key to solving the compute scaling problem wasn't creating chips with ever-faster clock rates. Rather, to get big, the compute industry went small, through the introduction of multicore processors.

The true path to scaling was found in distribution. By creating many smaller cores, workloads could be distributed and processed in parallel, reducing compute times. And the ability to push workloads out to large numbers of CPU cores means that compute power can be scaled up by fanning workloads out. Essentially, this is the premise behind scale-out architectures.
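To make the fan-out idea concrete, here is a minimal Python sketch (illustrative, not from the original post) that splits one job into chunks and distributes the chunks across whatever cores are available:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Each core works on one fraction of the overall job."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

    # Fan the chunks out across the available cores in parallel,
    # then combine the partial results.
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

Note that the work had to be restructured into independent chunks before it could be distributed at all, which is exactly the application transformation discussed below.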

A point that gets lost in all of this is that simply putting a multicore processor in a server didn't automatically deliver scale. In fact, if the application itself was not changed, it would run on just one of however many cores the CPU had. To take advantage of this scaling out of compute, the applications themselves had to go through a transformation.

The same story has played itself out on the storage side. The premise behind Big Data applications is that volumes of data can be sharded across a number of nodes so that operations can be scaled out across a larger number of servers, each handling a fraction of the job and completing its part in less time. By spreading workloads out across multiple storage servers, the time it takes to fetch data and perform operations drops.
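As a rough illustration of the sharding idea (node names and shard count are assumptions for this sketch), each key can be mapped deterministically to one of N storage nodes so that work fans out across the cluster:

```python
import hashlib

NODES = ["storage-0", "storage-1", "storage-2", "storage-3"]

def shard_for(key: str) -> str:
    """Deterministically map a key to the node responsible for it."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

# Each node holds roughly 1/N of the data, so a full scan can run on
# all nodes in parallel, each completing its fraction in less time.
for key in ("user:42", "order:9001", "session:abc"):
    print(key, "->", shard_for(key))
```

A production system would typically use consistent hashing instead, so that adding or removing a node reshuffles only a fraction of the keys.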

Here again, the applications themselves need to change to take advantage of the new architecture.

What is happening now?
The application space as a whole is essentially going through its own transformation. At its most basic, this means that companies are in the process of rewriting business applications to take advantage of available data and to embrace a new architecture more capable of scaling than before.

Note that an additional property of scale-out applications is that they tend to be more resilient to failures in the infrastructure. Applications written expressly for scale-out environments tend to be designed not with the goal of eliminating failures but rather of making failures transparent to the rest of the application. By replicating data in multiple places (across servers and across racks, for instance), Big Data applications are less reliant on any individual server or switch.
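A minimal sketch of what rack-aware replica placement can look like (the topology is an illustrative assumption; real systems such as HDFS use more nuanced placement rules):

```python
import itertools

# Illustrative topology: three racks, two servers each.
TOPOLOGY = {
    "rack-1": ["server-a", "server-b"],
    "rack-2": ["server-c", "server-d"],
    "rack-3": ["server-e", "server-f"],
}

def place_replicas(replication_factor: int = 3) -> list:
    """Pick one unused server per rack until the replication factor is met."""
    racks = itertools.cycle(TOPOLOGY.items())
    placements, used = [], set()
    while len(placements) < replication_factor:
        rack, servers = next(racks)
        for server in servers:
            if server not in used:
                placements.append((rack, server))
                used.add(server)
                break
    return placements

# Losing any one server -- or an entire rack -- still leaves two copies.
print(place_replicas())
```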

But the application evolution isn’t just confined to companies with large enterprise applications. An entire industry has emerged to support a transition from multi-tiered applications to the current generation of flat, scaled out applications pioneered by the likes of Facebook and Google. If initiatives like Mesos and Docker are any indication, the future of high-performance applications will exist only in distributed environments, with operating system and toolkit support.

Where is the network in all of this?
Overlooked in the transition is the network. For decades, the network has been built to be intentionally agnostic to what is running on top of it. While there have been intermittent periods when terms like "network-aware" and "application-aware" were bandied about, the majority of networking since its inception has been an exercise in creating connectivity by providing bandwidth uniformly between any endpoints on the network.

The entire premise to scaling out is providing workload capacity in small chunks and then distributing applications across those chunks. In the case of compute, this is done by creating many small cores (or VMs) and then moving the applications to the available compute. In the case of storage, storage and processing capacity is spread across a number of nodes, and application workloads are distributed to free capacity. In each of these cases, small blocks of capacity are created and the application workloads are moved to them.

How should the model for networking evolve? If scalable solutions all have the property that application workloads are moved to where capacity is present, then networking needs to go through some fairly foundational changes.

Networking today is based on a set of pathing algorithms that date back more than 50 years. The whole of networking rests on Shortest Path First algorithms that essentially reduce the paths in the network to the set with the fewest hops. There might be hundreds of ways to get from point A to point B, but the network will only use the subset with the shortest possible path. Technologies like Equal-Cost Multi-Path (ECMP) routing then load-balance traffic across paths with the same number of hops.
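The following sketch illustrates that behavior on a toy topology (an assumption for illustration): enumerate the paths between two endpoints, keep only the shortest ones, and let an ECMP-style per-flow hash choose among the equal-cost survivors:

```python
GRAPH = {
    "A": ["B", "C", "D"],
    "B": ["A", "E"],
    "C": ["A", "E"],
    "D": ["A", "F"],
    "E": ["B", "C", "F"],
    "F": ["D", "E"],
}

def all_paths(src, dst, path=None):
    """Enumerate every loop-free path from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in GRAPH[src]:
        if nxt not in path:
            yield from all_paths(nxt, dst, path)

paths = list(all_paths("A", "E"))
shortest = min(len(p) for p in paths)
ecmp_set = [p for p in paths if len(p) == shortest]

print("all loop-free paths:", paths)   # includes the longer A-D-F-E path
print("ECMP keeps only:", ecmp_set)    # the two 2-hop paths survive

# Per-flow hashing then pins each flow to one equal-cost path.
# (Python salts str hashes per process; real ECMP hashes the 5-tuple
# deterministically in hardware.)
flow = ("10.0.0.1", "10.0.1.7", 6, 51512, 443)
print("this flow takes:", ecmp_set[hash(flow) % len(ecmp_set)])
```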

If the objective is to identify where there is capacity and push application flows across the least congested links, there will need to be fundamental changes in how networking functions to account for non-equal-cost multipathing (that is, fanning traffic across all available links, not just the shortest ones).
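A minimal sketch of that alternative, with utilization figures assumed for illustration: rather than discarding longer paths, weight every available path by its remaining headroom and spread new flows proportionally:

```python
import random

# (path, current utilization 0.0-1.0) for every available path,
# not just the shortest ones; figures are illustrative assumptions
PATHS = [
    ("A-B-E",   0.90),  # shortest, but congested
    ("A-C-E",   0.60),
    ("A-D-F-E", 0.10),  # longer, but nearly idle
]

def pick_path() -> str:
    """Choose a path with probability proportional to its free capacity."""
    weights = [1.0 - util for _, util in PATHS]
    return random.choices([p for p, _ in PATHS], weights=weights)[0]

tally = {p: 0 for p, _ in PATHS}
for _ in range(10_000):
    tally[pick_path()] += 1
print(tally)  # the nearly idle longer path absorbs most new flows
```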

Next-generation application requirements
If the Third Platform era is characterized by a new generation of applications, understanding what those applications require will determine how infrastructure in support of those applications must evolve.

Third Era applications have the following properties:

  • Horizontally scaled – Applications will tend to be based on scale-out architectures
  • Agile – With an eye towards facilitating service management, interactions (from provisioning to troubleshooting) will be highly automated—across infrastructure silos
  • Integrated – To achieve the performance and scale required, compute, storage, networking, and the application will all be integrated
  • Resilient – Distributed applications will not be designed for infrastructure uptime but rather for overall application resiliency (fault tolerant, not fault free)
  • Secure – With data underpinning many of these applications, security and compliance (along with auditability) will be key

These properties will determine how each of compute, storage, and networking must evolve.

Scale-out networking
The network that supports scale-out applications will itself be based on a scale-out architecture. The key property of scale-out is less about the ultimate scale and more about the path to scale. If applications grow by adding instances (on servers, on VMs, or in containers), then the supporting infrastructure must keep pace by enabling additional capacity as needed.

There are two facets here that are important:

Graceful addition of new capacity – Because application capacity will be turned up as needed, the requisite infrastructure capacity must be easy to add. Additional servers should be added without significant re-architecture efforts, storage servers must be added without redesigning the cluster, and network capacity must be added without going through a massive datacenter deployment exercise. For leaf-spine architectures, growth occurs through step-function-like scaling. When the number of access ports requires the addition of a new spine switch, for example, the entire architecture must be revisited and every device re-cabled. This incurs a significantly longer delay than either the compute or storage equivalents. A next-generation network designed with the graceful addition of new capacity in mind would allow for non-disruptive capacity additions.
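To put rough numbers on that step function, here is a back-of-envelope sketch (port counts are illustrative assumptions):

```python
# Why leaf-spine growth is step-function-like: illustrative port counts
# (48 access ports and 4 uplinks per leaf, 32-port spines).
LEAF_DOWNLINKS = 48   # access ports per leaf
LEAF_UPLINKS = 4      # one uplink to each spine
SPINE_PORTS = 32      # each spine port connects to one leaf

max_leaves = SPINE_PORTS                       # limited by spine radix
max_access_ports = max_leaves * LEAF_DOWNLINKS
print(f"{LEAF_UPLINKS} spines, {max_leaves} leaves -> {max_access_ports} access ports")

# Growing past this point means adding a spine, and because every leaf
# must connect to every spine, each of the 32 leaves needs a new uplink
# cabled in -- the disruptive, step-function jump described above.
```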

Scale down capacity when it is not needed – While most scaling discussions focus on scaling up, it is equally important to scale down. For instance, if a specific application requires less capacity at certain times or under certain conditions, the supporting infrastructure must be capable of redeploying that capacity to other applications or tenants as it makes sense. Traditional networking using leaf-spine architectures uniformly distributes capacity regardless of application requirements. Next-generation network architectures should be able to dynamically reclaim and reallocate capacity that is not required. This means leveraging technologies like WDM, which allow capacity to be treated as a fluid resource that can be applied using programmatic controls from a central management point.

It is worth adding a note here about how compute and storage will scale out. If a new resource is added in a different physical location, the role of the network is not just to provide the requisite capacity but also to make that resource look as if it were physically adjacent to the other resources. Scaling out is not just about capacity, then; it is also about providing high-bandwidth, low-latency connectivity so that data locality becomes a less stringent requirement. This means that resources can sit across the datacenter, across a metro area, or across a continent.
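A quick worked example of why that connectivity matters: propagation delay in fiber is roughly 5 microseconds per kilometer, so round-trip times grow quickly with distance (the distances below are illustrative):

```python
US_PER_KM = 5  # approximate one-way propagation delay in fiber, microseconds

for label, km in [("across the datacenter", 0.5),
                  ("across a metro area", 80),
                  ("across a continent", 4000)]:
    rtt_ms = 2 * km * US_PER_KM / 1000
    print(f"{label:22s} ~{rtt_ms:g} ms round trip")
```

At datacenter scale the penalty is negligible; at continental scale (roughly 40 ms round trip in this sketch), applications must be designed to tolerate the distance.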

Agility
Agility, put simply, is about making change faster. The notion that you can plan your architecture years in advance and then allow the system to simply run is just no longer true. When applications and the data they use are dynamic, change is a certainty. The question is: how do you deal with that change?

There are two ways to deal with change: automate what you can, and make everything else easier.

The currency of automation is data. To be automated, data must be shared between systems in a programmatic way. Automation is, in many ways, a byproduct of effective integration, but integration is not by itself the entire story. To automate, you must understand workflows—how things are actually done. This is an exercise in understanding how information is layered based on frame of reference. When there is an issue related to a web server, automation is more about collecting all data related to that web server across all infrastructure than taking some action. The challenge in automating things is knowing what action to take, not reducing the keystrokes it takes to execute the command.
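As a minimal sketch of that collection step (the data sources here are illustrative stand-ins for real inventory and monitoring APIs), correlation looks like assembling one view per frame of reference:

```python
# Illustrative stand-ins for real inventory and monitoring APIs.
COMPUTE = {"web-01": {"host": "hv-07", "vcpus": 8, "state": "running"}}
STORAGE = {"web-01": {"volume": "vol-1138", "iops": 450}}
NETWORK = {"web-01": {"port": "leaf3/17", "vlan": 210, "errors": 0}}

def collect(server: str) -> dict:
    """Correlate everything known about one server across the silos."""
    return {
        "compute": COMPUTE.get(server),
        "storage": STORAGE.get(server),
        "network": NETWORK.get(server),
    }

# The hard part is not executing a command but assembling this context;
# once correlated, a human (or a policy) can decide what action to take.
print(collect("web-01"))
```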

It is impossible to automate everything, so the remaining elements of the network must be flexible and more wieldy than we have become accustomed to in networking. This means simplifying architectures so that we are deploying fewer devices and ports. It means reducing the number of control points in the network so we have fewer places to go to make changes. And it means implicitly handling behaviors that have traditionally been managed through pinpoint control over thousands of configuration knobs.

Integrated infrastructure
The future of IT infrastructure is based not on silos of compute, storage, and networking capacity but on the various subsystems working together to deliver application workloads. Accordingly, solutions that service the Third Platform era will either be tightly-integrated solutions from a single vendor, or collections of components that are explicitly designed to be integrated.

In the case of the former, the concern for customers is the pricing impacts of an integrated solution. Vertical stacks are inherently more difficult to displace, which means that incumbency creates a strong barrier to adoption, which will have the tendency to drive pricing higher (noting that there is already a lot of pressure to push pricing down).

In the case of the latter, the integration will need to be more than testing things alongside each other. This is not about the coexistence of equipment but rather the intelligent interaction of devices from different classes of vendor. From a Plexxi perspective, this is why efforts like the Data Services Engine (DSE) are important. They provide a means of integrating, but more importantly, they provide a framework that is easily extensible to other infrastructure so that the future is always integratable. Additionally, this integration layer is open source, so the likelihood of lock-in is significantly lower.

Resilience
The next-generation platform is resilient. Rather than designing for correctness and relying on a never-ending battery of tests to ensure efficacy, infrastructure is constructed using building blocks that are themselves resilient to failures and tolerant to issues in either the hardware or the software. From a compute perspective, this means having the ability to fail to other application instances or containers. For storage, this means replicating data across multiple servers in multiple racks. For networking, this is all about having multiple paths between endpoints so that if one goes down, resources are not stranded.

With resiliency built in via Plexxi's inherent optical path diversity, the emphasis shifts to failure detection and failover. Path pre-calculation plays a major role in keeping failover and convergence times low.
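A minimal sketch of the pre-calculation idea (topology and routes are illustrative assumptions): compute a backup path for each primary ahead of time, so that failover is a table swap rather than a fresh computation:

```python
FORWARDING = {
    # destination: (primary path, precomputed backup path)
    "10.0.1.0/24": (["A", "B", "E"], ["A", "C", "E"]),
    "10.0.2.0/24": (["A", "D", "F"], ["A", "B", "E", "F"]),
}

def on_link_failure(failed_link: tuple) -> None:
    """Swap in the precomputed backup for routes crossing the failed link."""
    a, b = failed_link
    for dest, (primary, backup) in FORWARDING.items():
        hops = list(zip(primary, primary[1:]))
        if (a, b) in hops or (b, a) in hops:
            FORWARDING[dest] = (backup, primary)
            print(f"{dest}: failed over to {'-'.join(backup)}")

on_link_failure(("B", "E"))  # a table swap, not a route recomputation
```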

Over time, resilience will include pushes into DevOps-style deployment models where the infrastructure is treated as a single system image that is qualified before changes are deployed. This will require integration with DevOps tools—not just tools like Chef and Ansible but also tools like Jenkins and Git.

Security
Security is about keeping data secure, not just keeping equipment secure. This means that traffic must be isolated where necessary, auditable when required, and ultimately managed as a collection of tenant and application flows that can be treated individually as their payloads require. To get differentiated service, there will need to be policy abstraction that dictates the workload requirements for individual tenants and applications. For instance, if a workload requires special treatment, it can be flagged and redirected as needed.
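As a minimal sketch of such a policy abstraction (tenants, tags, and treatments are illustrative assumptions), flows can be classified and mapped to whatever treatment their policy dictates:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    tenant: str
    app: str
    tags: frozenset

POLICIES = [
    # (predicate, treatment), evaluated in order; first match wins
    (lambda f: "pci" in f.tags,      "isolate + audit log"),
    (lambda f: f.app == "analytics", "bulk class, any available path"),
    (lambda f: f.tenant == "gold",   "low-latency paths only"),
]

def treatment_for(flow: Flow) -> str:
    for predicate, treatment in POLICIES:
        if predicate(flow):
            return treatment
    return "default best effort"

# A flagged workload gets isolated and audited, as described above.
print(treatment_for(Flow("gold", "payments", frozenset({"pci"}))))
```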


More Stories By Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
