IT’s Third Platform By @PlexxiInc [#SDN #BigData]

The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends

With the blurring of technology lines, the rise of competitive companies, and a shift in buying models all before us, it would appear we are at the cusp of ushering in the next era in IT — the Third Platform Era. But as with the other transitions, it is not the technology or the vendors that trigger a change in buying patterns. There must be fundamental shifts in buying behavior driven by business objectives.

The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends: the proliferation of data (read: Big Data) and the need for additional performance and scale. In many regards, the first begets the second. As data becomes more available—via traditional datacenters, and both public and private cloud environments — applications look to use that data, which means the applications themselves have to go through an evolution to account for the scale and performance required.

Scale up or scale out?
When the industry talks about scale, people typically trot out Moore's Law to explain how capacity doubles every 18 months or so. Strictly speaking, Moore's Law is more observation than law, and it originally described the number of transistors on an integrated circuit. It has, however, become a fairly loosely used way to think about performance over time.

Of course, as the need for compute resources has skyrocketed, the key to solving the compute scaling problem wasn’t in creating chips with faster and faster clock rates. Rather, to get big, the compute industry went small through the introduction of multicore processors.

The true path to scaling was found in distribution. By creating many smaller cores, workloads could be distributed and worked on in parallel to create lower compute times. And the ability to push workloads out to large numbers of CPU cores means that the amount of compute power could be scaled up by fanning workloads out. Essentially, this is the premise behind scale-out architectures.
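The fan-out idea above can be sketched in a few lines. This is a minimal illustration, not production code: threads stand in for cores to keep the sketch self-contained, whereas real scale-out uses separate processes or separate machines; all function names here are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles a fraction of the job in parallel.
    return sum(x * x for x in chunk)

def scale_out_sum_of_squares(data, workers=4):
    # Split the workload into one chunk per worker, fan out,
    # then combine the partial results.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

The shape is the important part: small units of capacity, workloads distributed across them, results merged at the end.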

A point that gets lost in all of this is that simply putting a multicore processor in a server didn't mean that scale came automatically. In fact, if the application itself was not changed, it would run entirely within a single core, no matter how many cores the CPU had. To take advantage of this scaling out of compute, the applications themselves had to go through a transformation.

The same story has played itself out on the storage side. The premise behind Big Data applications is that volumes of data can be sharded across a number of nodes so that operations can be scaled out across a larger number of servers, each one handling a fraction of the job and completing their parts in less time. By spreading workloads out across multiple storage servers, the time it takes to fetch data and perform operations drops.
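Sharding itself is conceptually simple: hash each key to pick a node, so every node holds a predictable fraction of the data. The sketch below is a generic illustration (hash-mod placement), not the scheme of any particular Big Data system; the names are made up for the example.

```python
import hashlib

def shard_for(key, num_shards):
    # Deterministically map a key to one of num_shards storage nodes.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

def shard_dataset(records, num_shards=4):
    # Distribute (key, value) records across num_shards buckets so
    # each node only handles its fraction of the data.
    shards = [[] for _ in range(num_shards)]
    for key, value in records:
        shards[shard_for(key, num_shards)].append((key, value))
    return shards
```

Because the mapping is deterministic, any node can compute where a key lives without consulting a central directory.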

Here again, the applications themselves need to change to take advantage of the new architecture.

What is happening now?
The application space as a whole is essentially going through its own transformation. At its most basic, this means that companies are in the process of rewriting business applications to take advantage of available data and to embrace a new architecture more capable of scaling than before.

Note that an additional property of scaled out applications is that they will tend to be more resilient to failures in the infrastructure. Applications written expressly for scale-out environments tend to be designed not with the goal of eliminating failures but rather making failures transparent to the rest of the applications. By replicating data in multiple places (across servers and across racks, for instance), Big Data applications are less reliant on individual servers or switches.
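Rack-aware replication is what makes that resilience possible: copies are spread across failure domains so no single server or top-of-rack switch takes down all replicas. Here is a toy placement routine under that assumption; it is not any real system's placement algorithm, just the principle.

```python
def place_replicas(servers_by_rack, replicas=3):
    # servers_by_rack: {"rack1": ["s1", "s2"], ...}
    # Spread replicas across distinct racks first, so losing one
    # rack (or its top-of-rack switch) never loses every copy.
    placement = []
    racks = sorted(servers_by_rack)
    i = 0
    while len(placement) < replicas:
        rack = racks[i % len(racks)]
        candidates = [s for s in servers_by_rack[rack] if s not in placement]
        if candidates:
            placement.append(candidates[0])
        i += 1
        if i > replicas * len(racks):  # not enough servers to satisfy the request
            break
    return placement
```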

But the application evolution isn’t just confined to companies with large enterprise applications. An entire industry has emerged to support a transition from multi-tiered applications to the current generation of flat, scaled out applications pioneered by the likes of Facebook and Google. If initiatives like Mesos and Docker are any indication, the future of high-performance applications will exist only in distributed environments, with operating system and toolkit support.

Where is the network in all of this?
Overlooked in the transition is the network. For decades, the network has been built to be intentionally agnostic to what is running on top of it. While there have been intermittent periods where terms like "network-aware" and "application-aware" have been bandied about, the majority of networking since its inception has been an exercise in creating connectivity by providing bandwidth uniformly between any endpoints on the network.

The entire premise to scaling out is providing workload capacity in small chunks and then distributing applications across those chunks. In the case of compute, this is done by creating many small cores (or VMs) and then moving the applications to the available compute. In the case of storage, storage and processing capacity is spread across a number of nodes, and application workloads are distributed to free capacity. In each of these cases, small blocks of capacity are created and the application workloads are moved to them.

How should the model for networking evolve? If scalable solutions all have the property that application workloads are moved to where capacity is present, then networking needs to go through some fairly foundational changes.

Networking today is based on a set of pathing algorithms that date back more than 50 years. The bulk of networking is built on Shortest Path First algorithms, which essentially reduce the paths in the network to the set with the fewest hops. There might be hundreds of ways to get from point A to point B, but the network will only use the subset with the shortest possible path. Technologies like Equal-Cost Multi-Path (ECMP) then load balance traffic across paths with the same number of hops.
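To make the point concrete, the sketch below enumerates paths in a small topology and keeps only the minimum-hop subset — the paths an SPF-based network would actually use. It is a simplified illustration (breadth-first search over a toy graph), not a real routing implementation.

```python
from collections import deque

def shortest_paths(graph, src, dst):
    # Enumerate paths from src to dst, keeping only those with the
    # fewest hops -- the subset an SPF-based network would use.
    # ECMP would then spread traffic across exactly this subset.
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than the shortest found; the network ignores it
        node = path[-1]
        if node == dst:
            if best is None or len(path) < best:
                best, paths = len(path), []
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths
```

In a diamond topology with two 2-hop routes and one 3-hop route, only the two 2-hop routes survive — the third physical path sits idle.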

If the objective is to identify where there is capacity and push application flows across the least congested links, there will need to be fundamental changes in how networking functions to account for non-equal-cost multipathing (that is, fanning traffic across all available links).
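The capacity-aware alternative is to weight traffic by headroom rather than hop count. A minimal sketch of that idea, assuming we already know the free capacity on each link (the link names and units are illustrative):

```python
def distribute_flows(links, total_flows):
    # links: {link_name: free_capacity}. Spread flows across ALL
    # links in proportion to their headroom, not just the
    # equal-cost subset an SPF/ECMP network would use.
    total_free = sum(links.values())
    shares = {name: round(total_flows * free / total_free)
              for name, free in links.items()}
    # Correct any rounding drift so every flow lands somewhere.
    drift = total_flows - sum(shares.values())
    if drift:
        shares[max(links, key=links.get)] += drift
    return shares
```

A link with half the total headroom carries half the flows, regardless of whether it sits on the shortest path.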

Next-generation application requirements
If the Third Platform era is characterized by a new generation of applications, understanding what those applications require will determine how infrastructure in support of those applications must evolve.

Third Era applications have the following properties:

  • Horizontally scaled – Applications will tend to be based on scale-out architectures
  • Agile – With an eye toward facilitating service management, interactions (from provisioning to troubleshooting) will be highly automated, across infrastructure silos
  • Integrated – To achieve the performance and scale required, compute, storage, networking, and the application will all be integrated
  • Resilient – Distributed applications will be designed not for infrastructure uptime but for overall application resiliency (fault tolerant, not fault free)
  • Secure – With data underpinning many of these applications, security and compliance (along with auditability) will be key

These properties will determine how each of compute, storage, and networking must evolve.

Scale-out networking
The network that supports scale-out applications will itself be based on a scale-out architecture. The key property of scale-out is less about the ultimate scale and more about the path to scale. If applications scale by adding application instances (on servers, on VMs, or in containers), then the supporting infrastructure must keep pace by enabling additional capacity as needed.

There are two facets here that are important:

Graceful addition of new capacity - Because application capacity will be turned up as needed, the requisite infrastructure capacity must be easy to add. Additional servers should be added without significant re-architecture efforts, storage servers must be added without re-designing the cluster, and network capacity must be added without going through a massive datacenter deployment exercise. For leaf-spine architectures, growth occurs through step-function-like scaling. When the number of access ports requires the addition of a new spine switch, for example, the entire architecture must be revisited and every device re-cabled. This incurs a significantly longer delay than either the compute or storage equivalents. A next-generation network designed with the graceful addition of new capacity in mind would allow for non-disruptive capacity additions.
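The step-function effect is easy to see with back-of-the-envelope port math. The sketch below assumes 48-port leaf switches with one uplink per spine; the numbers are illustrative, not a sizing guide.

```python
def leaf_spine_access_ports(leaves, spines, leaf_ports=48):
    # In a leaf-spine fabric every leaf uplinks to every spine,
    # so each added spine consumes one more port on EVERY leaf --
    # and requires re-cabling every leaf in the fabric.
    uplinks_per_leaf = spines
    return leaves * (leaf_ports - uplinks_per_leaf)
```

Going from two spines to three actually *reduces* access ports per leaf (one more uplink each) while touching every device in the fabric — the opposite of the graceful, incremental growth that adding one more server or storage node gives you.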

Graceful removal of capacity when it is not needed - While most scaling discussions focus on scaling up, it is equally important to scale down. For instance, if a specific application requires less capacity at certain times or under certain conditions, the supporting infrastructure must be capable of redeploying that capacity to other applications or tenants as it makes sense. Traditional networking using leaf-spine architectures uniformly distributes capacity regardless of application requirements. Next-generation network architectures should be able to dynamically reclaim capacity where it is not required and reallocate it where it is. This means leveraging technologies like WDM, which allows capacity to be treated as a fluid resource that can be applied using programmatic controls from a central management point.

It is probably worth adding an element here about how compute and storage will scale out. If a new resource is added in a different physical location, the role of the network is not just to have the requisite capacity but also to make that resource look as if it is physically adjacent to the other resources. Scaling out is not just about capacity then; it is also about providing high-bandwidth, low-latency connectivity such that data locality is a less stringent requirement than otherwise. This means that resources can be across the datacenter, across a metro area, or across a continent.

Agility
Agility, put simply, is about making change faster. The notion that you can plan your architecture years in advance and then allow the system to simply run is just no longer true. When applications and the data they use are dynamic, change is a certainty. The question is: how do you deal with that change?

There are two ways to deal with change: automate what you can, and make everything else easier.

The currency of automation is data. To be automated, data must be shared between systems in a programmatic way. Automation is, in many ways, a byproduct of effective integration, but integration is not by itself the entire story. To automate, you must understand workflows—how things are actually done. This is an exercise in understanding how information is layered based on frame of reference. When there is an issue related to a web server, automation is more about collecting all data related to that web server across all infrastructure than taking some action. The challenge in automating things is knowing what action to take, not reducing the keystrokes it takes to execute the command.
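That "collect everything first" framing can be sketched directly: automation starts as aggregation across silos, with action deferred until the operator (or a higher-level system) has the full picture. The source names and fetch functions below are hypothetical placeholders for real monitoring, DNS, or inventory systems.

```python
def collect_context(target, sources):
    # sources: {system_name: callable(target) -> dict of facts}.
    # Pull everything each infrastructure silo knows about one
    # target (e.g. a web server) into a single frame of reference.
    context = {}
    for system, fetch in sources.items():
        try:
            context[system] = fetch(target)
        except Exception as err:
            # One silo being unreachable shouldn't stop collection.
            context[system] = {"error": str(err)}
    return context
```

The hard part — deciding what action to take from this context — stays with the operator or a policy layer; the automation removes the toil of gathering, not the judgment.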

It is impossible to automate everything, so the remaining elements of the network must be more flexible and more manageable than we have become accustomed to in networking. This means simplifying architectures so that we are deploying fewer devices and ports. It means reducing the number of control points in the network so we have fewer places to go to make changes. And it means implicitly handling behaviors that have traditionally been managed through pinpoint control over thousands of configuration knobs.

Integrated infrastructure
The future of IT infrastructure is based not on silos of compute, storage, and networking capacity but on the various subsystems working together to deliver application workloads. Accordingly, solutions that service the Third Platform era will either be tightly-integrated solutions from a single vendor, or collections of components that are explicitly designed to be integrated.

In the case of the former, the concern for customers is the pricing impacts of an integrated solution. Vertical stacks are inherently more difficult to displace, which means that incumbency creates a strong barrier to adoption, which will have the tendency to drive pricing higher (noting that there is already a lot of pressure to push pricing down).

In the case of the latter, the integration will need to be more than testing things alongside each other. This is not about the coexistence of equipment but rather the intelligent interaction of devices from different classes of vendor. From a Plexxi perspective, this is why efforts like the Data Services Engine (DSE) are important. They provide a means of integrating, but more importantly, they provide a framework that is easily extensible to other infrastructure so that the future is always integratable. Additionally, this integration layer is open source, so the likelihood of lock-in is significantly lower.

Resilience
The next-generation platform is resilient. Rather than designing for correctness and relying on a never-ending battery of tests to ensure efficacy, infrastructure is constructed using building blocks that are themselves resilient to failures and tolerant to issues in either the hardware or the software. From a compute perspective, this means having the ability to fail to other application instances or containers. For storage, this means replicating data across multiple servers in multiple racks. For networking, this is all about having multiple paths between endpoints so that if one goes down, resources are not stranded.

With resiliency built in via Plexxi’s inherent optical path diversity, the emphasis shifts to failure detection and failover. Path pre-calculation is a major player in making sure that failover and convergence times stay low.

Over time, resilience will include pushes into DevOps-style deployment models where the infrastructure is treated as a single system image that is qualified before changes are deployed. This will require integration with DevOps tools—not just tools like Chef and Ansible but also tools like Jenkins and Git.

Security
Security is about keeping data secure, not just keeping equipment secure. This means that traffic must be isolated where necessary, auditable when required, and ultimately managed as a collection of tenant and application flows that can be treated individually as their payloads require. To get differentiated service, there will need to be policy abstraction that dictates the workload requirements for individual tenants and applications. For instance, if a workload requires special treatment, it can be flagged and redirected as needed.
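A policy abstraction of that kind can be as simple as an ordered list of match rules mapped to treatments, evaluated per flow. This is a minimal sketch of the idea — the tenant names, match fields, and treatment labels are all invented for the example.

```python
def classify_flow(flow, policies):
    # policies: ordered list of (match, treatment) pairs, where
    # match is a dict of attributes the flow must carry.
    # The first policy whose every attribute matches wins.
    for match, treatment in policies:
        if all(flow.get(k) == v for k, v in match.items()):
            return treatment
    return "best-effort"  # default when no policy claims the flow
```

Ordering matters: a narrow rule (tenant + application) placed before a broad one (tenant only) lets specific workloads get differentiated treatment while everything else falls through to the tenant-wide policy.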

The post The requirements for IT’s Third Platform appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
