We Need More Peer-to-Peer Shared Cloud Infrastructure

Why should I move IT infrastructure to the cloud when I already have invested in my own data center?

I recently participated in a panel discussion titled "Infrastructure as a Service (IaaS) endgame - commoditizing the data center" at the GigaOM Structure 2012 conference. While software increasingly defines how IaaS is consumed through Platform as a Service (PaaS), the underlying need for IaaS does not go away. In my opinion, the current pace of infrastructure and data center build-out cannot be maintained, which will create a future need for resource sharing in this space.

Resource sharing includes everything that contributes to a data center, particularly the physical infrastructure. Several Software as a Service (SaaS) providers, especially in the collaboration space, have innovatively optimized their infrastructure to serve their specialized purpose. Facebook, for example, has open sourced its hardware designs through the Open Compute Project. Skype, as you know, routes calls and video through its users' bandwidth, reducing the need for back-end infrastructure.

With companies increasingly focused on environmental responsibility, it would be much better to productively re-use physical data centers (and leverage what we've already got) than to convert them into basketball courts, as I saw in a recent HP advertisement.

Cloud computing is evolving, and with the transition many companies are asking this question: Why should I move IT infrastructure to the cloud when I have already invested in my own data center? Existing data centers are equipped with many of the capabilities required for high availability, such as uninterruptible power supply (UPS) and network bandwidth. Companies moving existing workloads to the cloud may find they have more capacity than they need in the data centers they already own. At the same time, more cloud service providers are looking for space to expand their capacity for new customers.

This can provide an opportunity for everyone to improve resource usage.

As an analogy to what I'm talking about, we can look to power companies. These utilities have been around a long time, supplying power to customers via a massive electricity grid. Increasingly, significant investment is going into solar panels and wind turbines that can contribute excess capacity back to the grid. In fact, this is becoming popular among both businesses and consumers, who can reduce their power bills and earn credits.

Why can't we apply this same concept to cloud computing, where everyone "contributes" to the grid?
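To make the net-metering analogy concrete, here is a small sketch of how compute credits might offset a bill. All rates, ratios and numbers are illustrative assumptions of mine, not figures from any real provider:

```python
# Hypothetical "net metering" for compute: a peer that feeds spare CPU-hours
# back into the shared grid earns credits against what it consumes. The rate
# and the credit ratio below are illustrative assumptions only.

RATE_PER_CPU_HOUR = 0.05   # assumed price: 5 cents per CPU-hour

def net_bill(consumed_hours: float, contributed_hours: float,
             credit_ratio: float = 0.8) -> float:
    """Bill in dollars after crediting contributed capacity at a fraction
    of the consumption rate."""
    charge = consumed_hours * RATE_PER_CPU_HOUR
    credit = contributed_hours * RATE_PER_CPU_HOUR * credit_ratio
    return round(max(charge - credit, 0.0), 2)  # credits never go negative

# A data center consuming 10,000 CPU-hours while contributing 6,000 back:
print(net_bill(10_000, 6_000))  # $500 charge - $240 credit = 260.0
```

Just as with rooftop solar, the contributor's bill shrinks in proportion to what it feeds back, and the grid operator gains capacity it did not have to build.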

My proposition is that cloud service providers, rather than investing in new infrastructure, evaluate the option of leveraging existing data center facilities. In a cloud computing environment driven by software management, cloud service providers could control the usage of IT resources across multiple locations, improving the value proposition for everyone involved.
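A minimal sketch of what that software-management layer might do, with entirely hypothetical names and a deliberately naive greedy policy: a broker tracks spare capacity contributed by peer data centers and places incoming workloads on the site with the most headroom.

```python
# Sketch of a capacity broker over peer-contributed data centers.
# Names, fields and the placement policy are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class PeerDataCenter:
    name: str
    spare_cores: int                       # capacity the owner isn't using
    placed: list = field(default_factory=list)

def place(workload: str, cores_needed: int, peers: list) -> str:
    """Greedy placement: pick the peer with the most spare cores."""
    best = max(peers, key=lambda p: p.spare_cores)
    if best.spare_cores < cores_needed:
        raise RuntimeError("no peer has enough spare capacity")
    best.spare_cores -= cores_needed
    best.placed.append(workload)
    return best.name

peers = [PeerDataCenter("chicago", 32), PeerDataCenter("prague", 64)]
print(place("batch-job-1", 48, peers))   # prague has the most headroom
print(place("batch-job-2", 24, peers))   # prague is down to 16, so chicago
```

A real broker would of course need isolation, metering and failure handling on top of this, but the core idea - software deciding where workloads land across independently owned facilities - fits in a few lines.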

Service providers that already offer peer-to-peer sharing capabilities include Spotify, Freenet, Gnutella, BitTorrent and Kazaa - but these are very consumer-focused solutions. Compute sharing enabled by BOINC, the open-source software for volunteer and grid computing, powers a number of grids, the largest being World Community Grid, which is targeted primarily at home users. Many of us have also donated home computing resources to help SETI search for extraterrestrial life.

We need to take these consumer examples and apply them to the business market. If IaaS is considered to consist mostly of compute, storage and network resources, the business market is largely devoid of sharing solutions, with the exception of storage sharing by Symform. There is a wide-open opportunity for software innovation to enable network and compute sharing in the enterprise space - attractive not just for cost savings but also as a way to truly "go green".

Cloud service providers need to target complete IaaS sharing by leveraging existing data center resources. This will enable infrastructure re-use and encourage large data centers to embrace the cloud computing paradigm shift without wasting existing investments. We could see cloud service providers enable "mini-Amazons" while improving high availability. I foresee several companies building on the storage-sharing concept pioneered by Symform to include all IaaS components.

The main challenge is completing the software components required to pull this off: components capable of managing shared resources securely, most likely built on an open API infrastructure.
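One way such an open API layer might look - every endpoint, payload and the token scheme below are hypothetical illustrations, not an existing specification: providers register and offer spare capacity, consumers lease it, and every call carries a token so the broker can authenticate both sides.

```python
# Sketch of a hypothetical open API for shared IaaS capacity, shown as an
# in-memory object; a real system would expose these as authenticated
# HTTP endpoints with real isolation guarantees.

import secrets

class SharedResourceAPI:
    def __init__(self):
        self.offers = {}   # offer_id -> {"cores": int, "leased": bool}
        self.tokens = set()

    def register_peer(self) -> str:
        """Enroll a peer and hand back an access token."""
        token = secrets.token_hex(8)
        self.tokens.add(token)
        return token

    def _authenticate(self, token: str) -> None:
        if token not in self.tokens:
            raise PermissionError("unauthenticated peer")

    def offer_capacity(self, token: str, cores: int) -> str:
        """A provider advertises spare cores; returns an offer id."""
        self._authenticate(token)
        offer_id = f"offer-{len(self.offers)}"
        self.offers[offer_id] = {"cores": cores, "leased": False}
        return offer_id

    def lease(self, token: str, cores: int) -> str:
        """A consumer leases the first unleased offer that is big enough."""
        self._authenticate(token)
        for offer_id, offer in self.offers.items():
            if not offer["leased"] and offer["cores"] >= cores:
                offer["leased"] = True
                return offer_id
        raise RuntimeError("no matching capacity")

api = SharedResourceAPI()
provider, consumer = api.register_peer(), api.register_peer()
oid = api.offer_capacity(provider, 16)
print(api.lease(consumer, 8) == oid)  # True
```

The point is not the specific calls but the shape of the contract: a small, open, authenticated surface that any data center owner could implement to contribute capacity to the grid.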

•   •   •

This post was first published on Symform.com. Republished with permission.

More Stories By Larry Carvalho

Larry Carvalho runs Robust Cloud LLC, an advisory services company helping various ecosystem players develop a strategy to take advantage of cloud computing. As the 2010-12 instructor of Cloud Expo's popular Cloud Computing Bootcamp, he has led the bootcamp in New York, Silicon Valley, and Prague, receiving strong positive feedback from attendees about the value gained at these events. Carvalho has facilitated all-day sessions at customer locations to set a clear roadmap and gain consensus among attendees on strategy and product direction. He has participated in multiple discussion panels on cloud computing trends at information technology events, and he has delivered all-day cloud computing training to customers in conjunction with CloudCamps. To date, his role has taken him to clients on three continents.

