Is the Enterprise Datacenter a Dying Breed?

There is no doubt in my mind that we will continue to grow our own datacenter.

As an SDN network provider focused on the datacenter, we spend a good amount of time understanding the state of datacenters today, tomorrow, and some time into the future.

There is no question that the use of Software as a Service (SaaS) applications in the cloud is growing rapidly. Plexxi itself is a shining example: across all functional areas, few of the applications we use run in-house.

There are many reasons why we picked cloud-based applications for our needs. As a small company, in many cases there is a very simple economic choice to make: paying for a cloud-based service is simply cheaper than building your own infrastructure. Creating a datacenter infrastructure is not cheap, and maintaining it and the applications that run on top of it is a serious investment. When you are small, that overhead is hard to carry, and per-user charges for a cloud-based application are much easier to swallow.

But as small as we are, we have clear needs for in-house datacenter resources, and we are not in a very compute- or storage-intensive business. We have built a mini datacenter in our test environment. This is where we do our scaling tests and our integration testing with external systems, and where we even run big data applications as part of the test and development cycle. We have a growing environment where we validate larger and larger systems through simulation.

We are extremely focused on making sure that all our applications are as tightly integrated as they can be. We constantly chase our application providers for hooks and integrations that allow us to create a seamless environment with clear workflows from one application to another. Some of these integrations can only be done on non-cloud versions of the applications we use. Our use of some of these applications is heavy enough that access performance is becoming an issue. Productivity loss is hard to measure but very real.

There is no doubt in my mind that we will continue to grow our own datacenter. Some things we have to run in-house to ensure a controlled environment with dedicated access; others will be more hybrid, with local cache and proxy versions of cloud-based applications.
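To make the hybrid idea concrete, here is a minimal sketch of a local cache sitting in front of a cloud-hosted service, so repeated reads are served locally instead of making a round trip every time. This is illustrative only, not a description of any real deployment; the `fetch` function stands in for whatever remote API call the cloud application exposes.

```python
import time

class CachingProxy:
    """Minimal local cache in front of a remote (cloud) fetch function.

    Illustrative sketch: `fetch` is a placeholder for a real call to a
    cloud-hosted application's API.
    """

    def __init__(self, fetch, ttl_seconds=60.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._cache = {}  # key -> (expiry_time, value)

    def get(self, key):
        now = time.monotonic()
        entry = self._cache.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]           # served from the local cache
        value = self._fetch(key)      # fall through to the cloud service
        self._cache[key] = (now + self._ttl, value)
        return value
```

The TTL is the knob that trades freshness against round trips; a real caching proxy would also need invalidation and consistency handling, which is exactly where hybrid deployments get hard.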

This week I read an article in which Intel's CIO Kim Stevenson talks about Intel's own datacenter infrastructure. Of course, Intel is somewhat unique in the sense that they create one of the most critical pieces of datacenter resources, but really they are a big multinational like so many others with compute and storage needs for their business.

In the article, Kim articulates some of the key reasons why the enterprise datacenter will not disappear. A direct quote: “That’s because the company runs mission-critical applications for developing intellectual property, manufacturing, customer service, and product development, and thus far, these work better internally”, followed by “the company is very sensitive about its proprietary data”. These two quotes capture the key reasons to keep certain things in-house: access, performance, flexibility, customization, security, locality. The first few will improve with better cloud environments and access to them, but the last few will face much higher resistance.

The size of Intel’s datacenters is quite impressive: 630,000 Xeon cores across 50,000 servers, with utilization close to 90% throughout the day. That would be one heck of a compute workload to place into the cloud. Yes, Intel is large. But there are so many others like them, some with perhaps even heavier compute and storage requirements than Intel: large pharmaceuticals performing chemical research and analysis, oil and gas companies feeding huge amounts of data into their compute centers in search of natural resources, banks, insurance companies, and credit card companies storing millions and billions of transactions and trying to find patterns in an attempt to understand us better and sell us more.
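A quick back-of-the-envelope on those reported figures shows just how dense and busy that footprint is (simple arithmetic on the numbers quoted above, nothing more):

```python
# Figures as reported in the article
total_cores = 630_000
servers = 50_000
utilization = 0.90  # "close to 90%" throughout the day

cores_per_server = total_cores / servers   # average density
busy_cores = total_cores * utilization     # cores in use at any moment

print(cores_per_server)  # 12.6 cores per server on average
print(busy_cores)        # roughly 567,000 cores busy around the clock
```

In other words, even at steady state this is the equivalent of over half a million continuously busy cores, which gives a sense of the scale any cloud migration would have to absorb.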

There is no question that many of our applications will move to the cloud. Pure economics will drive that. But at the same time, there will continue to be resistance, for a long time to come, to moving certain applications and data into the cloud. And as Intel’s numbers show, those represent very significant amounts of resources.

The enterprise datacenter will continue to exist and grow for a long time to come. Where and how we run our applications will shift, with more of them moving into the cloud. The boundary between local and cloud will blur, with some applications fully in the cloud, others fully local, and many in a hybrid between the two for performance, security, scaling, or elasticity reasons. And it is there that we as an industry creating datacenter infrastructures need to focus.

[Today's fun fact: The 4th of July is (not surprisingly) the day with the highest hot dog consumption in the US, a staggering 150 million on that one day alone. For tomorrow, happy 4th to all in the US and a happy Friday to everyone else. As for Saturday: Hup Holland Hup.]

 

The post Is the Enterprise Datacenter a Dying Breed? appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
