

Cloud Datacenters: 4 Questions for Uptime Institute

Digital Infrastructure VP Steve Carter Tracks Progress of Public & Private Clouds

With the global growth of cloud computing solutions and the datacenters that support them, it seemed like a good time to fire off a few questions to the Uptime Institute, which leads the global conversation about datacenters through its certifications, consulting, research, and educational programs.

So here are four questions for Steve Carter, VP of Digital Infrastructure Services at the Institute.

1. How are datacenters becoming more efficient? What are the major strategies being used to maximize processing power while trying to keep cooling costs under control?

Steve Carter: There are two strategies that should be part of any good datacenter efficiency improvement effort.

The first is reducing IT's electrical load by improving the utilization of IT systems on a per-server basis. The second is lowering the overhead power required by the mechanical and electrical systems that support the IT load.

Our Digital Infrastructure Services clients that currently average 30% virtualization across their distributed systems can realize a 2:1 payback on money spent for three-year transformation projects that significantly reduce future total IT electrical loads.
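
To make that claim concrete, here is a back-of-the-envelope Python sketch of the arithmetic behind a virtualization payback. Every figure in it (server counts, wattages, PUE, power price, project cost) is an illustrative assumption, not an Uptime Institute number.

```python
# Back-of-the-envelope virtualization payback model. All figures below
# are illustrative assumptions, not Uptime Institute data.

HOURS_PER_YEAR = 8760

def annual_power_cost(servers, avg_watts, pue, price_per_kwh):
    """Yearly electricity cost for a fleet, including facility overhead (PUE)."""
    kwh = servers * avg_watts / 1000 * HOURS_PER_YEAR * pue
    return kwh * price_per_kwh

# Before: 1,000 lightly utilized physical servers.
before = annual_power_cost(1000, avg_watts=350, pue=1.8, price_per_kwh=0.10)

# After: consolidated 8:1 onto fewer, busier virtualized hosts.
after = annual_power_cost(125, avg_watts=500, pue=1.8, price_per_kwh=0.10)

annual_savings = before - after   # roughly $453,000/year with these inputs
project_cost = 680_000            # assumed three-year transformation budget
payback_ratio = 3 * annual_savings / project_cost

print(f"Annual power savings: ${annual_savings:,.0f}")
print(f"Three-year payback: {payback_ratio:.1f}:1")
```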

2. How important will latency and related issues be to datacenters? That is, what is the potential for datacenters to serve customers beyond their national and even continental borders?

Steve: Several large global companies have successfully consolidated datacenters into single geographic regions. Significant effort was required to test and deploy application environments that are more tolerant of global latency. Many legacy application environments must be replaced by web-services-style environments before global consolidation is possible.

Consolidation allowed these global clients to reduce the total number of datacenter sites requiring global network connectivity. With fewer connectivity concentration sites, fewer circuits are needed, and the savings can be reinvested in bandwidth: these companies were often able to dramatically increase the bandwidth of the remaining circuits at a lower total global cost.
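
For a sense of the physics driving those decisions, here is a minimal Python sketch that puts a floor under round-trip latency using the speed of light in fiber (roughly 200 km per millisecond). The distances are approximate great-circle figures, and real WAN paths add routing and protocol overhead on top.

```python
# Physical lower bound on round-trip time over fiber. Light in fiber
# covers roughly 200 km per millisecond; real WAN paths add routing,
# queuing, and protocol overhead, so these are floors, not measurements.

FIBER_KM_PER_MS = 200

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip for a given one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Approximate great-circle distances (km).
for route, km in [("New York -> London", 5_570),
                  ("New York -> Singapore", 15_340),
                  ("Frankfurt -> Sydney", 16_500)]:
    print(f"{route}: >= {min_rtt_ms(km):.0f} ms RTT")

# A chatty legacy application making 40 sequential round trips per
# transaction over a ~150 ms path adds ~6 seconds per transaction,
# which is why such applications get re-architected before consolidation.
```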

3. I've been guilty of equating "datacenter hosting" with "cloud computing," even though that's not always the case. What percentage of hosted datacenter services will be focused on cloud computing over the next few years?

Steve: Public, outsourced cloud services will be adopted at different rates by different industry sectors, depending on their maturity. Startup companies that do not have legacy infrastructures already show very high percentages of public cloud deployment. At the other end of the spectrum, financial services organizations will be much slower to implement public cloud services.

I believe that public cloud adoption will follow the trends we observed for virtualization from 2006 to the present. Application development environments were among the first to be virtualized in quantity, and I think we are seeing the same pattern develop for public cloud.

Applying cloud technologies within private datacenters is a trend that is gaining momentum. We have clients that are already in the development and test phases of transforming their client-facing web services environments from traditional architectures and infrastructures to private cloud environments.

4. To what degree do economies of scale start to apply to datacenters? That is, even with so much offsite cloud computing, there will be local, company-owned datacenters for many more decades, I would assume. Most of these would be smaller than large, hosted plants, right? So how important are economies of scale, and what can companies do to ensure their local datacenter is as optimized and efficient as possible?

Steve: I believe that private datacenters can benefit significantly by adopting the basic approaches used by datacenter service providers.

Service providers clearly understand their infrastructure CAPEX and OPEX costs for every square foot of space, every kW of power added and every BTU of cooling required.  This is not always true of private datacenter owners.  Understanding the true costs associated with any new added infrastructure requirement is necessary to effectively manage a datacenter at any scale.

Private datacenters benefit greatly from clearly understanding how every additional kW of IT load impacts their CAPEX and OPEX performance. Service providers must understand these basic financial facts if they are to remain in business.
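
As a minimal sketch of the unit-cost accounting Carter describes (a cost per kW and per square foot for every addition), the Python below amortizes an assumed build-out cost and adds annual operating expense. The dollar figures are illustrative assumptions, not provider benchmarks.

```python
# Illustrative per-unit infrastructure cost model. All dollar figures
# are assumptions for demonstration, not provider benchmarks.
from dataclasses import dataclass

@dataclass
class DatacenterCosts:
    capex_total: float       # build-out cost, straight-line amortized
    amortization_years: int
    annual_opex: float       # power, cooling, staff, maintenance
    critical_load_kw: float  # supportable IT load
    whitespace_sqft: float   # raised-floor area

    def _annual_total(self) -> float:
        return self.capex_total / self.amortization_years + self.annual_opex

    def cost_per_kw_year(self) -> float:
        """What one kW of IT load really costs per year."""
        return self._annual_total() / self.critical_load_kw

    def cost_per_sqft_year(self) -> float:
        return self._annual_total() / self.whitespace_sqft

dc = DatacenterCosts(capex_total=30_000_000, amortization_years=15,
                     annual_opex=2_500_000, critical_load_kw=2_000,
                     whitespace_sqft=20_000)

print(f"${dc.cost_per_kw_year():,.0f} per kW-year")     # marginal cost of new IT load
print(f"${dc.cost_per_sqft_year():,.0f} per sqft-year")
```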

More Stories By Roger Strukhoff

Roger Strukhoff (@IoT2040) is Executive Director of the Tau Institute for Global ICT Research, with offices in Illinois and Manila. He is Conference Chair of @CloudExpo & @ThingsExpo, and Editor of SYS-CON Media's Cloud Computing, Big Data & IoT journals. He holds a BA from Knox College & conducted MBA studies at CSU-East Bay.
