Infrastructure 2.0: Squishy Name for a Squishy Concept

It remains, as James Urquhart put it recently, a “squishy term”

There’s been increasing interest in Infrastructure 2.0 of late, and that’s encouraging to those of us who’ve been, well, pushing it uphill against the focus on cloud computing and virtualization for quite some time now.

What’s been most frustrating about raising awareness of this concept is that cloud computing is one of the most tangible examples of both what Infrastructure 2.0 is and what it can do, and virtualization is certainly one of the larger technological drivers of Infrastructure 2.0-capable solutions today. So despite the frustration of cloud computing and virtualization stealing the stage, as it were, the spotlight is certainly helping to bring the issues Infrastructure 2.0 is attempting to address to the fore. As it gains traction, one of the first challenges that must be addressed is to define what it is we mean when we say “Infrastructure 2.0.”

Like Web 2.0 – go ahead and try to define it simply – Infrastructure 2.0 remains, as James Urquhart put it recently, a “squishy term.”

James Urquhart in “Understanding Infrastructure 2.0”:

Right now, Infrastructure 2.0 is one of those "squishy" terms that can potentially incorporate a lot of different network automation characteristics. As is hinted at in the introduction to Ness' interview, there is a working group of network luminaries trying to sort out the details and propose an architectural framework, but we are still very early in the game. [link to referenced interview added]

What complicates Infrastructure 2.0 is that not only is the term “squishy” but so is the very concept. After all, Infrastructure 2.0 is mostly about collaboration, about integration, about intelligence. These are not off-the-shelf “solutions” but rather enabling technologies designed to drive the flexibility and agility of enterprise networks forward in such a way as to alleviate the pain points associated with the brittle, fragile network architectures of the past.

Greg Ness summed up the concept, at least, very well more than a year ago in “The beginning of the end of static infrastructure” when he said, “The issue comes down to static infrastructure incapable of keeping up with all of the new IP addresses and devices and initiatives and movement/change already taking place in large enterprises” and then noted that “the notion of application, endpoint and network intelligence thus far has been hamstrung by the lack of dynamic connectivity, or connectivity intelligence.”
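
To make that pain point concrete: in a static world, every new workload means a manual ticket to update DNS and load balancer entries; in a dynamic one, the workload registers itself. Below is a minimal sketch in Python using the dnspython library to push an RFC 2136 dynamic DNS update the moment an orchestrator hands a new workload its address. The zone, server address, and hostname are hypothetical placeholders, and a production deployment would sign the update with a TSIG key.

# A minimal sketch: programmatically registering a new workload in DNS
# via an RFC 2136 dynamic update, using the dnspython library.
# The zone, server address, and hostname are hypothetical placeholders.
import dns.update
import dns.query

def register_workload(hostname: str, ip_address: str) -> None:
    """Add or replace the A record for a newly provisioned workload."""
    update = dns.update.Update("example.com")        # zone to update (placeholder)
    update.replace(hostname, 300, "A", ip_address)   # 300-second TTL
    # Send the update to the authoritative server over TCP; a real
    # deployment would authenticate this with a TSIG key.
    response = dns.query.tcp(update, "10.0.0.53", timeout=5)
    print(f"Update for {hostname} returned rcode {response.rcode()}")

# e.g., invoked by an orchestrator as soon as a VM acquires its address
register_workload("app-42", "10.0.1.17")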

What Greg noticed is missing is context, and perhaps even more importantly the ability to share that context across the entire infrastructure. I could, and have, gone on and on about this subject, so for now I’ll just stop and point to the insightful posts that have shed more light on Infrastructure 2.0 – its drivers, its requirements, its breadth of applicability, and its goals – to date.
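
As one hypothetical illustration of what “shared context” might look like (a sketch of my own, not anything Greg or the working group has specified), imagine an endpoint publishing a small structured event describing itself – who it is, where it is, what state it’s in – that DNS, load balancers, firewalls, and address management all consume:

# A hypothetical sketch of "shared context": an endpoint describes itself
# in a structured event that other infrastructure components can consume.
# The schema and field names are invented purely for illustration.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class EndpointContext:
    hostname: str
    ip_address: str
    application: str
    state: str        # e.g. "provisioning", "in-service", "draining"
    timestamp: float

def publish(event: EndpointContext) -> str:
    """Serialize the context event; a real system would hand this to a
    message bus that network devices and services subscribe to."""
    return json.dumps(asdict(event))

message = publish(EndpointContext(
    hostname="app-42",
    ip_address="10.0.1.17",
    application="orders-api",
    state="in-service",
    timestamp=time.time(),
))
print(message)  # consumers: DNS, load balancer pools, firewall policy, IPAM

The point isn’t this particular format; it’s that connectivity intelligence requires a shared, machine-readable vocabulary – exactly the sort of thing any standards effort would have to pin down.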

James believes "Infrastructure 2.0" will “evolve into a body of standards that will have the same impact as BGP or DNS” and I share that belief. The trick will be developing standards that allow for the “squishiness” required to stay flexible and adaptable across myriad architectures and environments while still standardizing how that happens.




