IT’s Third Platform By @PlexxiInc [#SDN #BigData]

The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends

With technology lines blurring, competitive companies rising, and buying models shifting, we appear to be at the cusp of the next era in IT: the Third Platform era. But as with earlier transitions, it is not the technology or the vendors that trigger a change in buying patterns. There must be fundamental shifts in buying behavior, driven by business objectives.

The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends: the proliferation of data (read: Big Data) and the need for additional performance and scale. In many regards, the first begets the second. As data becomes more available, via traditional datacenters and both public and private cloud environments, applications look to use that data, which means the applications themselves have to evolve to account for the scale and performance required.

Scale up or scale out?
When the industry talks about scale, people typically trot out Moore's Law to explain how capacity doubles roughly every 18 months. Strictly speaking, Moore's Law is more observation than law, and it originally described the number of transistors on an integrated circuit, not performance. It has, however, become a fairly loosely used way to think about performance over time.

Of course, as the need for compute resources has skyrocketed, the key to solving the compute scaling problem wasn’t in creating chips with faster and faster clock rates. Rather, to get big, the compute industry went small through the introduction of multicore processors.

The true path to scaling was found in distribution. By creating many smaller cores, workloads could be distributed and worked on in parallel, reducing total compute time. Pushing workloads out to large numbers of CPU cores meant compute power could be scaled up by fanning work out. Essentially, this is the premise behind scale-out architectures.

A point that gets lost in all of this is that simply putting a multicore processor in a server didn't automatically deliver scale. In fact, if the application itself was not changed, it would run on just one of however many cores the CPU had. To take advantage of this scaling out of compute, the applications themselves had to go through a transformation.
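
To make the transformation concrete, here is a minimal sketch in Python (all numbers hypothetical): the same aggregate work is first run serially on one core, then fanned out across every available core with a process pool. The work function stands in for any CPU-bound task.

```python
import os
from multiprocessing import Pool

def crunch(chunk):
    # Stand-in for a CPU-bound task: sum of squares over one chunk.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    cores = os.cpu_count() or 1

    # The unmodified application: runs on exactly one core.
    serial_total = crunch(data)

    # The rewritten application: split the workload into one chunk
    # per core and let a process pool work on the chunks in parallel.
    step = len(data) // cores + 1
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool(cores) as pool:
        parallel_total = sum(pool.map(crunch, chunks))

    assert serial_total == parallel_total
```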

The same story has played itself out on the storage side. The premise behind Big Data applications is that volumes of data can be sharded across a number of nodes so that operations can be scaled out across a larger number of servers, each one handling a fraction of the job and completing its part in less time. By spreading workloads out across multiple storage servers, the time it takes to fetch data and perform operations drops.
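
A minimal sketch of the sharding idea, assuming a hypothetical four-node cluster: every client hashes a key to the node that owns it, so reads and writes fan out across the cluster instead of piling onto one server.

```python
import hashlib

NODES = ["storage-1", "storage-2", "storage-3", "storage-4"]  # hypothetical cluster

def shard_for(key: str) -> str:
    # Hash the key to pick the owning node. Every client computes
    # the same mapping, so no central lookup service is needed.
    digest = hashlib.md5(key.encode()).digest()
    return NODES[int.from_bytes(digest, "big") % len(NODES)]

# A job over many keys spreads roughly evenly across the cluster,
# so each node handles a fraction of the work in parallel.
per_node = {}
for key in (f"record-{i}" for i in range(1_000)):
    per_node.setdefault(shard_for(key), []).append(key)

for node, owned in sorted(per_node.items()):
    print(node, len(owned))
```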

Here again, the applications themselves need to change to take advantage of the new architecture.

What is happening now?
The application space as a whole is essentially going through its own transformation. At its most basic, this means that companies are in the process of rewriting business applications to take advantage of available data and to embrace a new architecture more capable of scaling than before.

Note that an additional property of scaled-out applications is that they tend to be more resilient to failures in the infrastructure. Applications written expressly for scale-out environments are designed not with the goal of eliminating failures but of making failures transparent to the rest of the application. By replicating data in multiple places (across servers and across racks, for instance), Big Data applications are less reliant on any individual server or switch.
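
As a sketch of that placement logic (server and rack names invented for the example), each shard is written to replicas drawn from distinct racks, so losing any single server or top-of-rack switch still leaves copies reachable.

```python
# Hypothetical inventory of (server, rack) pairs.
SERVERS = [("s1", "rack-a"), ("s2", "rack-a"),
           ("s3", "rack-b"), ("s4", "rack-b"),
           ("s5", "rack-c"), ("s6", "rack-c")]

def place_replicas(shard_id: int, copies: int = 3):
    # Group servers by rack, then place at most one replica per rack,
    # rotating the starting rack per shard to balance load.
    racks = {}
    for server, rack in SERVERS:
        racks.setdefault(rack, []).append(server)
    rack_names = sorted(racks)
    placement = []
    for i in range(copies):
        rack = rack_names[(shard_id + i) % len(rack_names)]
        placement.append(racks[rack][shard_id % len(racks[rack])])
    return placement

# Shard 0 lands on one server in each of the three racks.
print(place_replicas(0))  # ['s1', 's3', 's5']
```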

But the application evolution isn't just confined to companies with large enterprise applications. An entire industry has emerged to support a transition from multi-tiered applications to the current generation of flat, scaled-out applications pioneered by the likes of Facebook and Google. If initiatives like Mesos and Docker are any indication, the future of high-performance applications will exist only in distributed environments, with operating system and toolkit support.

Where is the network in all of this?
Overlooked in the transition is the network. For decades, the network has been built to be intentionally agnostic to what is running on top of it. While there have been intermittent periods when terms like "network-aware" and "application-aware" have been bandied about, the majority of networking since its inception has been an exercise in creating connectivity by providing bandwidth uniformly between any endpoints on the network.

The entire premise of scaling out is providing workload capacity in small chunks and then distributing applications across those chunks. In the case of compute, this is done by creating many small cores (or VMs) and then moving the applications to the available compute. In the case of storage, storage and processing capacity are spread across a number of nodes, and application workloads are distributed to free capacity. In each case, small blocks of capacity are created and the application workloads are moved to them.

How should the model for networking evolve? If scalable solutions all have the property that application workloads are moved to where capacity is present, then networking needs to go through some fairly foundational changes.

Networking today is based on a set of pathing algorithms that date back more than 50 years. The whole of networking is built on Shortest Path First algorithms that essentially reduce the paths in the network to the set with the fewest hops. There might be hundreds of ways to get from point A to point B, but the network will only use the subset with the shortest possible path. Technologies like Equal Cost Multi Pathing (ECMP) then load balance traffic across paths with the same number of hops.
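
A toy illustration of both mechanisms, using a made-up five-node topology: a breadth-first search enumerates the minimum-hop paths, and an ECMP-style hash of the flow's endpoints picks among the equal-cost options so that all packets of one flow stay on one path.

```python
from collections import deque

# Hypothetical topology: node -> directly connected neighbors.
TOPO = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}

def shortest_paths(src, dst):
    # Breadth-first search collecting every minimum-hop path.
    best, paths = None, []
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than the shortest already found
        if path[-1] == dst:
            best = len(path)
            paths.append(path)
            continue
        for nbr in TOPO[path[-1]]:
            if nbr not in path:
                queue.append(path + [nbr])
    return paths

def ecmp_pick(flow, paths):
    # Hash the flow tuple so every packet of a flow takes the same
    # path. (A real switch hashes packet header fields in hardware.)
    return paths[hash(flow) % len(paths)]

candidates = shortest_paths("A", "E")   # A-B-D-E and A-C-D-E
print(candidates)
print(ecmp_pick(("10.0.0.1", "10.0.0.2", 80), candidates))
```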

If the objective is to identify where there is capacity and push application flows across the least congested links, there will need to be fundamental changes in how networking functions to account for non-equal-cost multipathing (that is, fanning traffic across all available links).
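
One way to picture the change (a generic sketch, not any vendor's actual algorithm): weight every usable path by its spare capacity, longer paths included, and spread flows in proportion to that headroom.

```python
# Hypothetical paths from A to E with current spare capacity (Gbps).
PATHS = {
    ("A", "B", "D", "E"): 4.0,   # shortest path, moderately loaded
    ("A", "C", "D", "E"): 1.0,   # shortest path, nearly congested
    ("A", "C", "F", "E"): 10.0,  # longer path that ECMP would never use
}

def pick_path(flow_id: int):
    # Spread flows across *all* usable paths in proportion to spare
    # capacity, instead of restricting traffic to the minimum-hop set.
    total = sum(PATHS.values())
    # Cheap deterministic hash mapping the flow onto [0, total).
    point = (flow_id * 2654435761 % 2**32) / 2**32 * total
    for path, spare in PATHS.items():
        point -= spare
        if point < 0:
            return path
    return next(iter(PATHS))  # guard against floating-point edge cases

for flow in range(5):
    print(flow, pick_path(flow))
```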

Next-generation application requirements
If the Third Platform era is characterized by a new generation of applications, understanding what those applications require will determine how infrastructure in support of those applications must evolve.

Third Era applications have the following properties:

  • Horizontally scaled – Applications will tend to be based on scale-out architectures
  • Agile – With an eye toward facilitating service management, interactions (from provisioning to troubleshooting) will be highly automated, across infrastructure silos
  • Integrated – To achieve the performance and scale required, compute, storage, networking, and the application will all be integrated
  • Resilient – Distributed applications will be designed not for infrastructure uptime but for overall application resiliency (fault tolerant, not fault free)
  • Secure – With data underpinning many of these applications, security and compliance (along with auditability) will be key

These properties will determine how each of compute, storage, and networking must evolve.

Scale-out networking
The network that supports scale-out applications will itself be based on a scale-out architecture. The key property of scale-out is less about the ultimate scale and more about the path to scale. If applications scale by adding instances (on servers, in VMs, or in containers), then the supporting infrastructure must bring additional capacity online as needed.

There are two facets here that are important:

Graceful addition of new capacity – Because application capacity will be turned up as needed, the requisite infrastructure capacity must be easy to add. Additional servers should be added without significant re-architecture efforts, storage servers must be added without redesigning the cluster, and network capacity must be added without going through a massive datacenter deployment exercise. For leaf-spine architectures, growth occurs through step-function-like scaling: when the number of access ports requires the addition of a new spine switch, for example, the entire architecture must be revisited and every device re-cabled. This incurs a significantly longer delay than either the compute or storage equivalents. A next-generation network designed with the graceful addition of new capacity in mind would allow for nondisruptive capacity additions.

Scale down capacity when it is not needed – While most scaling discussions focus on scaling up, it is equally important to scale down. For instance, if a specific application requires less capacity at certain times or under certain conditions, the supporting infrastructure must be capable of redeploying that capacity to other applications or tenants as it makes sense. Traditional networking using leaf-spine architectures distributes capacity uniformly, regardless of application requirements. Next-generation network architectures should be able to dynamically reallocate capacity that is not required. This means leveraging technologies like WDM (wavelength-division multiplexing), which allows capacity to be treated as a fluid resource that can be applied using programmatic controls from a central management point.
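
A toy model of that fluidity (names and numbers hypothetical, with integer "units" standing in for wavelengths): a central controller owns a pool of capacity and shifts it between tenants as demand rises and falls.

```python
class CapacityController:
    """Toy central management point for a pool of capacity units."""

    def __init__(self, total_units: int):
        self.free = total_units
        self.allocated = {}  # tenant -> units currently assigned

    def set_demand(self, tenant: str, units: int):
        delta = units - self.allocated.get(tenant, 0)
        if delta > self.free:
            raise RuntimeError("fabric at capacity")
        # Scaling down returns units to the pool; scaling up draws from it.
        self.free -= delta
        self.allocated[tenant] = units

ctrl = CapacityController(total_units=16)
ctrl.set_demand("analytics", 10)  # nightly batch job ramps up
ctrl.set_demand("web", 6)
ctrl.set_demand("analytics", 2)   # job finishes; capacity scales down
ctrl.set_demand("web", 14)        # freed units redeployed to the web tier
print(ctrl.allocated, "free:", ctrl.free)
```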

It is probably worth adding an element here about how compute and storage will scale out. If a new resource is added in a different physical location, the role of the network is not just to have the requisite capacity but also to make that resource look as if it is physically adjacent to the other resources. Scaling out is not just about capacity then; it is also about providing high-bandwidth, low-latency connectivity such that data locality is a less stringent requirement than otherwise. This means that resources can be across the datacenter, across a metro area, or across a continent.

Agility
Agility, put simply, is about making change faster. The notion that you can plan your architecture years in advance and then allow the system to simply run is just no longer true. When applications and the data they use are dynamic, change is a certainty. The question is: how do you deal with that change?

There are two ways to deal with change: automate what you can, and make everything else easier.

The currency of automation is data. For anything to be automated, data must be shared between systems in a programmatic way. Automation is, in many ways, a byproduct of effective integration, but integration is not by itself the entire story. To automate, you must understand workflows: how things are actually done. This is an exercise in understanding how information is layered based on frame of reference. When there is an issue related to a web server, automation is more about collecting all data related to that web server across all infrastructure than about taking some action. The challenge in automating things is knowing what action to take, not reducing the keystrokes it takes to execute a command.
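
A sketch of that framing, with every collector invented for the example: when an issue is reported against a web server, the automation's first job is to assemble every related view of that server across silos; deciding what action to take comes afterward.

```python
# Hypothetical per-silo collectors; in practice each would query a
# real API (hypervisor, storage array, switch fabric, monitoring).
def compute_view(host):  return {"vm_host": "hv-07", "cpu_pct": 92}
def storage_view(host):  return {"volume": "vol-3", "latency_ms": 41}
def network_view(host):  return {"port": "leaf2/17", "drops": 1200}

COLLECTORS = {"compute": compute_view,
              "storage": storage_view,
              "network": network_view}

def investigate(host: str) -> dict:
    # Correlate everything known about one entity across all silos.
    # Choosing the remediation comes later, and is the hard part.
    return {silo: collect(host) for silo, collect in COLLECTORS.items()}

print(investigate("web-01"))
```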

It is impossible to automate everything, so the remaining elements of the network must be flexible and more wieldy than we have become accustomed to in networking. This means simplifying architectures so that we are deploying fewer devices and ports. It means reducing the number of control points in the network so we have fewer places to go to make changes. And it means implicitly handling behaviors that have traditionally been managed through pinpoint control over thousands of configuration knobs.

Integrated infrastructure
The future of IT infrastructure is based not on silos of compute, storage, and networking capacity but on the various subsystems working together to deliver application workloads. Accordingly, solutions that service the Third Platform era will either be tightly-integrated solutions from a single vendor, or collections of components that are explicitly designed to be integrated.

In the case of the former, the concern for customers is the pricing impact of an integrated solution. Vertical stacks are inherently more difficult to displace, which means that incumbency creates a strong barrier to adoption, and that tends to drive pricing higher (even as there is already a lot of pressure to push pricing down).

In the case of the latter, the integration will need to be more than testing things alongside each other. This is not about the coexistence of equipment but rather the intelligent interaction of devices from different classes of vendor. From a Plexxi perspective, this is why efforts like the Data Services Engine (DSE) are important. They provide a means of integrating, but more importantly, they provide a framework that is easily extensible to other infrastructure, so that future additions remain integrable. Additionally, this integration layer is open source, so the likelihood of lock-in is significantly lower.

Resilience
The next-generation platform is resilient. Rather than designing for correctness and relying on a never-ending battery of tests to ensure efficacy, infrastructure is constructed from building blocks that are themselves resilient to failures and tolerant of issues in either the hardware or the software. From a compute perspective, this means having the ability to fail over to other application instances or containers. For storage, this means replicating data across multiple servers in multiple racks. For networking, it is all about having multiple paths between endpoints so that if one goes down, resources are not stranded.

With resiliency built in via Plexxi’s inherent optical path diversity, the emphasis shifts to failure detection and failover. Path pre-calculation is a major player in making sure that failover and convergence times stay low.
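
To make the pre-calculation point concrete (a generic sketch, not Plexxi's implementation): compute a primary path and a link-disjoint backup ahead of time, so failover is a table swap rather than a fresh convergence event.

```python
from collections import deque

# Hypothetical topology: node -> directly connected neighbors.
TOPO = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
        "D": ["B", "C"]}

def bfs_path(src, dst, banned=frozenset()):
    # Shortest path that avoids any link listed in `banned`.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in TOPO[path[-1]]:
            if nbr not in seen and frozenset((path[-1], nbr)) not in banned:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

# Pre-calculate both paths while the network is healthy.
primary = bfs_path("A", "D")
used = {frozenset(link) for link in zip(primary, primary[1:])}
backup = bfs_path("A", "D", banned=used)

print("primary:", primary)  # A-B-D
print("backup: ", backup)   # link-disjoint alternative: A-C-D
# When a link on the primary fails, traffic shifts to the backup
# immediately; no routing re-convergence has to happen first.
```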

Over time, resilience will include pushes into DevOps-style deployment models where the infrastructure is treated as a single system image that is qualified before changes are deployed. This will require integration with DevOps tools—not just tools like Chef and Ansible but also tools like Jenkins and Git.

Security
Security is about keeping data secure, not just keeping equipment secure. This means that traffic must be isolated where necessary, auditable when required, and ultimately managed as a collection of tenant and application flows that can be treated individually as their payloads require. To get differentiated service, there will need to be policy abstraction that dictates the workload requirements for individual tenants and applications. For instance, if a workload requires special treatment, it can be flagged and redirected as needed.
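
As an illustration of that abstraction (classes, tenants, and fields invented for the example): declare per-tenant policy once, then match each flow against it to decide isolation, auditing, and forwarding treatment.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    isolate: bool  # keep traffic on dedicated paths
    audit: bool    # mirror flow records for compliance
    priority: str  # forwarding class

# Declarative per-tenant/application policy (values hypothetical).
POLICIES = {
    ("payments", "card-api"): Policy(isolate=True, audit=True, priority="gold"),
    ("marketing", "*"):       Policy(isolate=False, audit=False, priority="bronze"),
}

def treatment(tenant: str, app: str) -> Policy:
    # Exact match first, then a tenant-level wildcard, then a default.
    return (POLICIES.get((tenant, app))
            or POLICIES.get((tenant, "*"))
            or Policy(isolate=False, audit=True, priority="silver"))

flow = treatment("payments", "card-api")
if flow.isolate:
    print("flag and redirect onto isolated paths; audit:", flow.audit)
```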


More Stories By Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
