The Next Technology Boom is Already Underway at Cisco, F5 Networks, Riverbed and VMware

Clouds, Virtualization and IT Diseconomies

Greg Ness's Blog

Infrastructure 2.0 is the next technology boom, and it is already underway. Cisco, F5 Networks, Riverbed and even VMware stand to benefit from this new infrastructure and the connectivity intelligence it promises. (More about these companies and others later in this article.)


Cloud computing has become a reality, yet the hype surrounding it has begun to outrun the laws of physics and economics. The robust cloud (the vision of all software delivered on demand, replacing the enterprise data center) will run into many of the same barriers and diseconomies facing enterprise IT today.

Certainly there will always be a business case for elements of the cloud, from Google's pre-enterprise applications to Amazon's popular services to the powerhouses of CRM, HR and other popular cloud offerings. Yet today's static infrastructure imposes substantial economic barriers to entry.

We've seen this collision between new software demands and network infrastructure many times before, as it has powered generations of innovation around TCP/IP, network security and traffic management and optimization.

It has produced a lineup of successful public companies well positioned to lead the next tech boom, which may even be recession-proof: Cisco, F5 Networks, Riverbed and even VMware.


Static Infrastructure Meets Dynamic Systems and Endpoints

I recently wrote about clouds, networks and recessions by taking a macro perspective on the evolution of the network and a likely coming recession. I also cited virtualization security as an example of yet another collision between more robust systems and static infrastructure, one that has slowed technology adoption and created demand for newer, more sophisticated solutions.

I posited that VMware was a victim of expectations raised by the promise of the virtualized data center and muted by technological limitations its technology partners could not address quickly enough. Clearly the network infrastructure has to evolve to the next level and enable new economies of scale. And I think it will.

Until the current network evolves into a more dynamic infrastructure, all bets are off on the payoffs of pretty much every major IT initiative on the horizon today, including cost-cutting measures meant to shrink operating costs without shrinking the network.

Automation and control have been both a key driver of and a barrier to the adoption of new technology, as well as to an enterprise's ability to monetize past investments. Increasingly complex networks require escalating rates of manual intervention. This dynamic will have more impact on IT spending over the next five years than the global recession, because automation is often the best answer to the productivity and expense challenge.

Networks Frequently Start with Reliance on Manual Labor

Decades ago the world's first telecom networks were simple and fairly manageable, at least by today's standards. The population of people who had telephones was smaller than the population of people who today have their own blogs. Neighborhoods were also very stable, and operators often personally knew many of the people they were connecting.



Those days, of course, are long gone; human operators today are involved only in exceptional cases and in highly automated, fee-based lookup services. The Bell System eventually automated the decisions made by those growing legions of operators, likely because scale and complexity were creating the diseconomies that larger enterprise networks are facing today. These phone companies eventually grew into massive networks servicing more dynamic rates of change and, ultimately, new services. Automation was the best way to escape the escalating manual labor requirements of the growing communications network.

TCP/IP Déjà vu

A very similar scenario is playing itself out in the TCP/IP network as enterprise networks grow in size and complexity and begin handling traffic between more dynamic systems and endpoints. The recent Computerworld survey (sponsored by Infoblox) shows larger networks paying a higher IP address management (IPAM) cost per IP address than smaller networks. As I mentioned earlier at Archimedius, this is clear evidence of networks growing into diseconomies of scale.

Acting on a hunch, I asked Computerworld to pull more data based on network size, and they were able to break their findings down into three network-size categories: 1) under 1,000 IP addresses; 2) 1,000-10,000 IP addresses; and 3) more than 10,000 IP addresses. Because the survey was based on only about 200 interviews, I couldn't break the trends down any further without taking statistical leaps with small samples.

Consider what it takes to keep a device connected to an IP network and ensure that it's always findable. First, it will need an unused IP address. In a 1.0 infrastructure, administrators use spreadsheets to track used and available IPs and assign them to things that are "fixed," like printers and servers.
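The "spreadsheet" model above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's product: the subnet and device names are hypothetical, and the point is simply that someone (or something) must manually scan for the next free address and record each assignment by hand.

```python
import ipaddress

# Hypothetical subnet for illustration only.
SUBNET = ipaddress.ip_network("10.0.0.0/28")

# The "spreadsheet": fixed device -> manually assigned address.
assignments = {
    "printer-1": ipaddress.ip_address("10.0.0.2"),
    "server-1": ipaddress.ip_address("10.0.0.3"),
}

def next_free_address():
    """Scan the subnet for the first host address not already assigned."""
    used = set(assignments.values())
    for host in SUBNET.hosts():
        if host not in used:
            return host
    raise RuntimeError("subnet exhausted")

def assign(device):
    """Manually 'add a row to the spreadsheet' for a new fixed device."""
    if device in assignments:
        raise ValueError(f"{device} already has an address")
    addr = next_free_address()
    assignments[device] = addr
    return addr

print(assign("server-2"))  # first unused host address: 10.0.0.1
```

The trouble is exactly what the diseconomies data suggests: every new device, move, or decommission requires a human to update the table, and the manual effort grows with the size of the network.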

More Stories By Greg Ness

Gregory Ness is the VP of Marketing of Vidder and has over 30 years of experience in marketing technology, B2B and consumer products and services. Prior to Vidder, he was VP of Marketing at cloud migration pioneer CloudVelox. Before CloudVelox he held marketing leadership positions at Vantage Data Centers, Infoblox (BLOX), BlueLane Technologies (VMW), Redline Networks (JNPR), IntruVert (INTC) and ShoreTel (SHOR). He has a BA from Reed College and an MA from The University of Texas at Austin. He has spoken on virtualization, networking, security and cloud computing topics at numerous conferences including CiscoLive, Interop and Future in Review.

