
Recession of 2008 Makes Cloud Computing the Biggest New IT Topic

Cloud computing has replaced virtualization as the new hot topic of 2008

Greg Ness's Blog

Cloud computing has replaced virtualization as the new hot topic of 2008. Yet underneath the headlines a very basic shift is taking place in the network, one that promises even more conversations in the very near future. Let's call this shift the rise of Infrastructure 2.0: the result of escalating pressures on an already tired network infrastructure.

Over the last three decades we've watched a meteoric rise in processing power and intelligence in network endpoints and systems drive an incredible series of network innovations, and those innovations have led to the creation of multi-billion dollar network hardware markets. As we watch the global economy shiver and shake, we now see signs of the next technology boom: Infrastructure 2.0.

Infrastructure 1.0 - The Multi-billion Dollar Static Network
From the expansion of TCP/IP in the 80s and 90s and the emergence of network security in the mid-to-late 90s to the evolution of performance and traffic optimization in the late 90s and early 00s, we've watched the net effects of ever-changing software and system demands colliding with static infrastructure. The result has been a renaissance of sorts in the network hardware industry, as enterprises installed successive foundations of specialized gear dedicated to the secure and efficient transport of an ever-increasing population of packets, protocols and services. That was and is Infrastructure 1.0.

Infrastructure 1.0 made companies like Cisco, Juniper/NetScreen, F5 Networks and more recently Riverbed very successful. It established and maintained the connectivity between ever-increasing global populations of increasingly powerful network-attached devices. Its impact on productivity and commerce is comparable to that of oceanic shipping, paved roads and railroads, electricity and air travel. It has shifted wealth and accelerated activities on a level that perhaps has no historical precedent.

I talked about the similar potential economic impacts of cloud computing in June, comparing its future role to the shipment of spices across Asia and the Middle East before the rise of oceanic shipping. One of the key enablers of cloud computing is virtualization, and our early experiences with data center virtualization have taught us plenty about the potential impact of clouds on static infrastructure. Some of these impacts will be felt on the network and others within the cloudplexes.

The market caps of Cisco, Juniper, F5, Riverbed and others will be impacted by how well they can adapt to the new dynamic demands challenging the static network.

Virtualization: The Beginning of the End of Static Infrastructure
The biggest threat to the world of multi-billion dollar Infrastructure 1.0 players is neither a protracted global recession nor the emergence of a robust population of hackers targeting increasingly lucrative endpoints. The biggest threat to the static world of Infrastructure 1.0 is the promise of even higher degrees of change and complexity on the way as systems and endpoints continue to evolve.

More fluid and powerful systems and endpoints will require either more network intelligence or even higher enterprise spending on network management.

This became especially apparent when VMware, Microsoft, Citrix and others in virtualization announced their plans to move their offerings into production data centers and endpoints. At that point the static infrastructure world was put on notice that its habitat of static endpoints was on its way into the history books. I blogged about this (sort of) at Always On in February 2007, making the point about the difficulties inherent in static network security keeping up with mobile VMs.

The sudden emergence of virtualization security marked the beginning of an even greater realization that the static infrastructure built over three decades was unprepared for supporting dynamic systems. The worlds of systems and networks were colliding again and driving new demands that would enable new solution categories.

The new chasm between static infrastructure and software now disconnected from hardware is much broader than virtsec, and it will ultimately drive the emergence of a more dynamic and resilient network, empowered by continued application layer innovations and the integration of static infrastructure with enhanced management and connectivity intelligence.
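To make that notion of "connectivity intelligence" concrete: when a workload moves, the network records that describe it (addresses, names, policies) should be updated by the event itself rather than re-entered by an operator. Below is a minimal sketch of that pattern in Python; every class and function name is hypothetical and purely illustrative, not a real vendor API.

# Sketch of Infrastructure 2.0-style automation: a VM migration event
# drives updates to name and policy records programmatically.
# All names here are hypothetical illustrations, not a real product interface.

from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    ip_address: str        # address the VM currently holds
    security_group: str    # policy tag, e.g. "web-tier"

class DynamicNetwork:
    """Stand-in for network services that expose programmatic interfaces."""

    def __init__(self):
        self.dns = {}        # hostname -> IP
        self.firewall = {}   # IP -> security group

    def register(self, vm: VirtualMachine) -> None:
        self.dns[vm.name] = vm.ip_address
        self.firewall[vm.ip_address] = vm.security_group

    def on_migration(self, vm: VirtualMachine, new_ip: str) -> None:
        # Infrastructure 1.0: an operator re-enters these records by hand.
        # Infrastructure 2.0: the migration event itself drives the updates.
        self.firewall.pop(vm.ip_address, None)
        vm.ip_address = new_ip
        self.register(vm)

if __name__ == "__main__":
    net = DynamicNetwork()
    web01 = VirtualMachine("web01", "10.0.1.15", "web-tier")
    net.register(web01)

    # A live migration lands the VM on a host in a different subnet;
    # DNS and policy follow automatically.
    net.on_migration(web01, "10.0.2.42")
    print(net.dns["web01"])            # 10.0.2.42
    print(net.firewall["10.0.2.42"])   # web-tier

The point of the sketch is only the shape of the problem: static security and addressing break when endpoints move, so the records must be driven by events rather than by manual configuration.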

As Google, Microsoft, Amazon and others push the envelope with massive virtualization-enabled cloudplexes revitalizing small-town economies (along with whoever else rides the clouds), they will continue to pressure the world of Infrastructure 1.0. More sophisticated systems will require more intelligent networks. That simple premise is the biggest threat today to network infrastructure players.

The market capitalizations of Cisco, Juniper, F5 and Riverbed will ultimately be tied to their ability to serve more dynamic endpoints, from mobile PCs to virtualized data centers and cloudplexes. Thus far the jury is still out on the nature and implications of the various partnership announcements between 1.0 players and virtualization players.

More Stories By Greg Ness

Gregory Ness is the VP of Marketing of Vidder and has over 30 years of experience in marketing technology, B2B and consumer products and services. Prior to Vidder, he was VP of Marketing at cloud migration pioneer CloudVelox. Before CloudVelox he held marketing leadership positions at Vantage Data Centers, Infoblox (BLOX), BlueLane Technologies (VMW), Redline Networks (JNPR), IntruVert (INTC) and ShoreTel (SHOR). He has a BA from Reed College and an MA from The University of Texas at Austin. He has spoken on virtualization, networking, security and cloud computing topics at numerous conferences including CiscoLive, Interop and Future in Review.


